Postulate: Everything that CAN happen, MUST happen.


Alan Grayson

Feb 1, 2020, 10:48:42 PM2/1/20
to Everything List
Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

Brent Meeker

Feb 2, 2020, 1:42:12 AM2/2/20
to everyth...@googlegroups.com
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent



On 2/1/2020 7:48 PM, Alan Grayson wrote:
Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/a1683342-69c3-4564-b18e-b3064f02e4c0%40googlegroups.com.

Philip Thrift

Feb 2, 2020, 2:17:17 AM2/2/20
to Everything List


On Saturday, February 1, 2020 at 9:48:42 PM UTC-6, Alan Grayson wrote:
Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

Assume that quantum random number chips do just what they say they do.

Now suppose someone hooks all the nuclear missile silos to launch their missiles if the first 1000 digits of the binary expansion of pi appear in sequence in the chip's output.

Then in MWI there is a world in which civilization is destroyed (or pretty much so).
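As a toy illustration of the scenario (my sketch, not part of the original post): on any single branch, an ideal quantum RNG emits one particular 1000-bit string with probability 2^-1000, which is fantastically small, yet nonzero, and "nonzero" is all the postulate needs.

```python
# Toy illustration: probability that an ideal quantum RNG emits one
# particular n-bit string. Each bit is independent with probability 1/2.
from fractions import Fraction

def prob_of_specific_string(n_bits: int) -> Fraction:
    """Exact probability of one particular n-bit output string."""
    return Fraction(1, 2 ** n_bits)

p = prob_of_specific_string(1000)
# Astronomically small (about 10**-301), but strictly positive --
# which is all "everything that can happen, happens" requires.
assert 0 < p < Fraction(1, 10 ** 300)
```

The point of the sketch is only that a nonzero amplitude, however tiny, corresponds to some branch in MWI; it says nothing about how often such a branch is experienced.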

@philipthrift

Alan Grayson

Feb 2, 2020, 6:32:10 AM2/2/20
to Everything List


On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


On 2/1/2020 7:48 PM, Alan Grayson wrote:
Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

smitra

Feb 2, 2020, 11:41:54 AM2/2/20
to everyth...@googlegroups.com
On 02-02-2020 04:48, Alan Grayson wrote:
> Can anyone offer a justification for this postulate, presumably at the
> heart of the MWI? Clark? AG
>

Anything that can happen does happen, and we can exploit that fact once we have electronic brains, using the method explained here:

https://arxiv.org/abs/0902.3825

Saibal

smitra

Feb 2, 2020, 12:00:52 PM2/2/20
to everyth...@googlegroups.com
On 02-02-2020 12:32, Alan Grayson wrote:
> On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
>
>> First, it's false. You can make it true by interpreting "can
>> happen" to mean "can happen according to the prediction of quantum
>> mechanics for this situation", but then it becomes trivial. Second,
>> it's not "at the heart of MWI"; the trivial version is all that MWI
>> implies. Read the first few paragraphs of this paper:
>>
>> arXiv:quant-ph/0702121v1 13 Feb 2007
>>
>> Brent
>
> In posing the question, I want to give its advocates such as Clark the
> opportunity to justify the postulate. It goes way beyond the MWI and
> QM. E.g., it means that if someone puts on his/her right shoe first
> this morning, there must be a universe in which a copy of the person
> puts on his/her left shoe first. It seems way, way over the top, but
> oddly many embrace it with gusto. AG
>
>> On 2/1/2020 7:48 PM, Alan Grayson wrote:

That copy then did something differently and will then diverge and
become a different person. The reverse is also true. Two arbitrary
different persons have always branched off from an identical set of
copies in the past. Take e.g. you and someone who lived 50,000 years ago
in the ice age.

When you were born, you would not have immediately known about the
modern world. You could just have been born into an ice age community
living in some cave. So, your mind would have been identical to that of
a baby born into such an ice age community. The split between the ice
age baby and the modern baby would have occurred quite some time after
birth.

You would even have been identical to a T-Rex embryo if you go back to a bit
before birth. All conscious agents were identical to each other if you
go back close enough to the point where you came into existence, as
you started out with zero information.

Saibal

Lawrence Crowell

Feb 2, 2020, 2:21:32 PM2/2/20
to Everything List
We probably can't ever know. What we call physical laws are local. Ultimately, if the only global law is that there is no law, and what counts as law (via the symmetry = conservation-principle correspondence) is local, then we are bounded from global observation by informational or epistemic horizons.

LC


John Clark

Feb 2, 2020, 6:00:30 PM2/2/20
to everyth...@googlegroups.com
On Sun, Feb 2, 2020 at 1:42 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> You can make it true by interpreting "can happen" to mean "can happen according the prediction of quantum mechanics for this situation", but then it becomes trivial. 

I suppose "trivial" is a pretty subjective word, but the difference between the following seems pretty non-trivial to me, at least in my purely subjective opinion:

1) Things are inherently random: Brent Meeker might see the electron go right or go left, but it will do only one thing; it's just that there is no way for Brent Meeker to predict which.

2) Things are inherently deterministic: the electron will go both left and right, and Brent Meeker will see both.

The words "Brent Meeker" are defined as meaning the man who remembers being Brent Meeker yesterday.

John K Clark

Brent Meeker

Feb 3, 2020, 12:02:54 AM2/3/20
to everyth...@googlegroups.com
And they will all be equal when they are dead.  But I don't think that
implies some continuum connecting them.

Brent

Brent Meeker

Feb 3, 2020, 1:37:49 AM2/3/20
to everyth...@googlegroups.com


On 2/2/2020 2:59 PM, John Clark wrote:
On Sun, Feb 2, 2020 at 1:42 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

> You can make it true by interpreting "can happen" to mean "can happen according the prediction of quantum mechanics for this situation", but then it becomes trivial. 

I suppose "trivial" is a pretty subjective word, but the difference between the following seems pretty non-trivial to me, at least in my purely subjective opinion:

1) Things are inherently random: Brent Meeker might see the electron go right or go left, but it will do only one thing; it's just that there is no way for Brent Meeker to predict which.

2) Things are inherently deterministic: the electron will go both left and right, and Brent Meeker will see both.

Taking "anything" to mean "one of two possible paths" makes "anything can happen" trivial. It rules out infinitely many things that one might otherwise consider included in "anything".

Brent


The words "Brent Meeker" are defined as meaning the man who remembers being Brent Meeker yesterday.

John K Clark

Bruno Marchal

Feb 3, 2020, 10:45:30 AM2/3/20
to everyth...@googlegroups.com
On 2 Feb 2020, at 04:48, Alan Grayson <agrays...@gmail.com> wrote:

Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

It is a direct consequence of the assumption of mechanism, where physics emerges from all computations (in arithmetic). So you don't need to postulate a physical world to understand that all consistent computations "happen", and the Everett formulation of QM can be seen as a confirmation of the mechanist theory of mind.

A key point in both Mechanism and Everett is that although all computations "happen", they do not happen relative to each other with the same relative probabilities. That can be used to refute some "ethical criticisms" that some people raise against Everett.

Bruno






Bruno Marchal

Feb 3, 2020, 10:48:50 AM2/3/20
to everyth...@googlegroups.com
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:



On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense, both with Mechanism (the many-worlds interpretation of arithmetic) and with Everett (the many-worlds formulation of QM). Thinking is presumably classical, so when you make a decision, you make the same decision in all worlds, with rare exceptions.

Bruno







On 2/1/2020 7:48 PM, Alan Grayson wrote:
Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

Alan Grayson

Feb 3, 2020, 11:43:12 AM2/3/20
to Everything List


On Monday, February 3, 2020 at 8:45:30 AM UTC-7, Bruno Marchal wrote:

On 2 Feb 2020, at 04:48, Alan Grayson <agrays...@gmail.com> wrote:

Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

It is a direct consequence of the assumption of mechanism, where physics emerges from all computations (in arithmetic). So you don't need to postulate a physical world to understand that all consistent computations "happen", and the Everett formulation of QM can be seen as a confirmation of the mechanist theory of mind.

A key point in both Mechanism and Everett is that although all computations "happen", they do not happen relative to each other with the same relative probabilities. That can be used to refute some "ethical criticisms" that some people raise against Everett.

Bruno

So if I put on my left shoe first today, there must be a universe in which I put on my right shoe first. And then there are the many ways I can tie my shoelaces, each resulting in more universes. This is about the most ridiculous model conceivable, don't ya think? Put another way, this is just plain dumb. AG





Bruce Kellett

Feb 3, 2020, 4:46:34 PM2/3/20
to everyth...@googlegroups.com
On Tue, Feb 4, 2020 at 2:48 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:
On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense,

No, it is exactly what Everett predicts. Everything that happens happens with probability one. All possible outcomes occur with unit probability in any interaction/experiment. David Albert makes the very good point that in your W/M duplication scenario, for example, no first person probabilities for potential outcomes can be defined.
both with Mechanism (the many-worlds interpretation of arithmetic) and with Everett (the many-worlds formulation of QM). Thinking is presumably classical, so when you make a decision, you make the same decision in all worlds, with rare exceptions.

Only if it is the same person in all those worlds. Different people make different (classical) decisions.

Bruce

Bruno Marchal

Feb 4, 2020, 8:06:05 AM2/4/20
to everyth...@googlegroups.com
On 3 Feb 2020, at 17:43, Alan Grayson <agrays...@gmail.com> wrote:



On Monday, February 3, 2020 at 8:45:30 AM UTC-7, Bruno Marchal wrote:

On 2 Feb 2020, at 04:48, Alan Grayson <agrays...@gmail.com> wrote:

Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

It is a direct consequence of the assumption of mechanism, where physics emerges from all computations (in arithmetic). So you don't need to postulate a physical world to understand that all consistent computations "happen", and the Everett formulation of QM can be seen as a confirmation of the mechanist theory of mind.

A key point in both Mechanism and Everett is that although all computations "happen", they do not happen relative to each other with the same relative probabilities. That can be used to refute some "ethical criticisms" that some people raise against Everett.

Bruno

So if I put on my left shoe first today, there must be a universe in which I put on my right shoe first. And then there are the many ways I can tie my shoelaces, each resulting in more universes. This is about the most ridiculous model conceivable, don't ya think? Put another way, this is just plain dumb. AG

The fact is that all those computations are executed in all models of elementary arithmetic. To deny them, you need either to deny that 2+2=4 or to deny the Church-Turing thesis.

And quantum mechanics without collapse, or just quantum field theory, confirms this, by providing evidence that if we want the exact decimals of the probability of finding an electron at B when it starts at A, we have to sum over all paths that the electron can take to go from A to B.

You have to assume a non-mechanist theory of mind to sustain your ontological commitment to a primitively physical universe.
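A toy sketch of the "sum over paths" idea alluded to above (my illustration, not from the thread): in a two-slit setup, the amplitude at B is the sum of the amplitudes along each path, and the probability is the squared modulus of that sum, not the sum of per-path probabilities. The `amplitude` function and the specific path lengths here are made up for illustration.

```python
# Two-path interference: amplitudes add, then the probability is the
# squared modulus of the SUM (quantum), versus summing per-path
# probabilities (classical, no interference).
import cmath

def amplitude(path_length: float, wavelength: float = 1.0) -> complex:
    """Unnormalized phase factor picked up along a path of given length."""
    return cmath.exp(2j * cmath.pi * path_length / wavelength)

a1 = amplitude(10.0)   # path through slit 1
a2 = amplitude(10.5)   # path through slit 2: half a wavelength longer

p_interfering = abs(a1 + a2) ** 2            # ~0: destructive interference
p_classical = abs(a1) ** 2 + abs(a2) ** 2    # = 2: no interference term
```

The half-wavelength path difference makes the two amplitudes cancel, which is exactly the kind of effect a "sum over probabilities" picture cannot reproduce.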

Bruno 










Bruno Marchal

Feb 4, 2020, 8:13:14 AM2/4/20
to everyth...@googlegroups.com
On 3 Feb 2020, at 22:46, Bruce Kellett <bhkel...@gmail.com> wrote:

On Tue, Feb 4, 2020 at 2:48 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:
On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense,

No, it is exactly what Everett predicts.

If that was the case, I don’t think we would still be here discussing Everett. 




Everything that happens happens with probability one.

Everett insists, perhaps wrongly (but then that is what should be debated), that he recovers the usual quantum statistics, where the probability is given by the square of the amplitude of the wave.
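For concreteness, the "square of the amplitude" rule mentioned here is the Born rule, which can be sketched in a few lines (my illustration, not from the thread): for a normalized state vector, each outcome's probability is the squared modulus of its amplitude, and the probabilities sum to one.

```python
# Born rule sketch: probabilities are squared moduli of amplitudes.
import math

# A normalized two-outcome state: amplitudes 1/sqrt(3) and i*sqrt(2/3).
state = [complex(1, 0) / math.sqrt(3), complex(0, 1) * math.sqrt(2 / 3)]

probs = [abs(a) ** 2 for a in state]   # [1/3, 2/3]
assert math.isclose(sum(probs), 1.0)   # normalization check
```

Whether Everett's formulation actually derives this rule, rather than assuming it, is exactly what the rest of this exchange disputes.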







All possible outcomes occur with unit probability in any interaction/experiment. David Albert makes the very good point that in your W/M duplication scenario, for example, no first person probabilities for potential outcomes can be defined.

Where? In his book "Quantum Mechanics and Experience"? Albert has clearly not understood Everett, IMO.

Can you make that point here?






both with Mechanism (the many-worlds interpretation of arithmetic) and with Everett (the many-worlds formulation of QM). Thinking is presumably classical, so when you make a decision, you make the same decision in all worlds, with rare exceptions.

Only if it is the same person in all those worlds.

But they are the same by definition, given that they are supposed to be the same above the substitution level by construction. Keep in mind that I am reasoning within the frame of the Mechanist hypothesis, like Darwin and Descartes, but revised by the digital Church-Turing thesis.



Different people make different (classical) decisions.

No problem. In the case above, those are not different people. They are numerically identical at the right substitution level by definition, which makes sense under the digital mechanist hypothesis.

Bruno





Bruce


Alan Grayson

Feb 4, 2020, 12:08:41 PM2/4/20
to Everything List


On Tuesday, February 4, 2020 at 6:06:05 AM UTC-7, Bruno Marchal wrote:

On 3 Feb 2020, at 17:43, Alan Grayson <agrays...@gmail.com> wrote:



On Monday, February 3, 2020 at 8:45:30 AM UTC-7, Bruno Marchal wrote:

On 2 Feb 2020, at 04:48, Alan Grayson <agrays...@gmail.com> wrote:

Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

It is a direct consequence of the assumption of mechanism, where physics emerges from all computations (in arithmetic). So you don't need to postulate a physical world to understand that all consistent computations "happen", and the Everett formulation of QM can be seen as a confirmation of the mechanist theory of mind.

A key point in both Mechanism and Everett is that although all computations "happen", they do not happen relative to each other with the same relative probabilities. That can be used to refute some "ethical criticisms" that some people raise against Everett.

Bruno

So if I put on my left shoe first today, there must be a universe in which I put on my right shoe first. And then there are the many ways I can tie my shoelaces, each resulting in more universes. This is about the most ridiculous model conceivable, don't ya think? Put another way, this is just plain dumb. AG

The fact is that all those computations are executed in all models of elementary arithmetic. To deny them, you need either to deny that 2+2=4 or to deny the Church-Turing thesis.

And quantum mechanics without collapse, or just quantum field theory, confirms this, by providing evidence that if we want the exact decimals of the probability of finding an electron at B when it starts at A, we have to sum over all paths that the electron can take to go from A to B.

You have to assume a non-mechanist theory of mind to sustain your ontological commitment to a primitively physical universe.

Bruno 

It doesn't pass the smell test. AG 










Bruce Kellett

Feb 4, 2020, 5:13:32 PM2/4/20
to everyth...@googlegroups.com
On Wed, Feb 5, 2020 at 12:13 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 3 Feb 2020, at 22:46, Bruce Kellett <bhkel...@gmail.com> wrote:
On Tue, Feb 4, 2020 at 2:48 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:
On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense,

No, it is exactly what Everett predicts.

If that was the case, I don’t think we would still be here discussing Everett. 

Everything that happens happens with probability one.

Everett insists, perhaps wrongly (but then that is what should be debated), that he recovers the usual quantum statistics, where the probability is given by the square of the amplitude of the wave.

It turns out, in fact, that Everett did not prove this result. As in conventional QM, he just asserted it.
All possible outcomes occur with unit probability in any interaction/experiment. David Albert makes the very good point that in your W/M duplication scenario, for example, no first person probabilities for potential outcomes can be defined.

Where? In his book "Quantum Mechanics and Experience"? Albert has clearly not understood Everett, IMO.

Can you make that point here?

I have read a lot of Albert's more recent work, and I can't remember exactly where he makes this point. I expect it was in a Podcast discussion with Sean Carroll:


The problem is that this is nearly two hours long, and I haven't time to listen to the whole thing again. He talks in detail about probability in Everett about half way through this discussion.

The basic argument is that people use symmetry arguments and the like to claim that the probabilities for H-man to end up in M or W are each equal to one-half. Albert points out that the same symmetries are respected by the claim that H-man has no idea where he will end up -- he cannot assign probabilities to the separate outcomes since each occurs with probability one.


both with Mechanism (the many-worlds interpretation of arithmetic) and with Everett (the many-worlds formulation of QM). Thinking is presumably classical, so when you make a decision, you make the same decision in all worlds, with rare exceptions.

Only if it is the same person in all those worlds.

But they are the same by definition, given that they are supposed to be the same above the substitution level by construction. Keep in mind that I am reasoning within the frame of the Mechanist hypothesis, like Darwin and Descartes, but revised by the digital Church-Turing thesis.

The trouble with all these arguments that you make is that you move away from quantum mechanics and argue in terms of your mechanism. That is OK for you, but it says nothing about the actual question at issue, which is what QM predicts about these situations.

Bruce

Bruno Marchal

Feb 6, 2020, 8:26:52 AM2/6/20
to everyth...@googlegroups.com
On 4 Feb 2020, at 18:08, Alan Grayson <agrays...@gmail.com> wrote:



On Tuesday, February 4, 2020 at 6:06:05 AM UTC-7, Bruno Marchal wrote:

On 3 Feb 2020, at 17:43, Alan Grayson <agrays...@gmail.com> wrote:



On Monday, February 3, 2020 at 8:45:30 AM UTC-7, Bruno Marchal wrote:

On 2 Feb 2020, at 04:48, Alan Grayson <agrays...@gmail.com> wrote:

Can anyone offer a justification for this postulate, presumably at the heart of the MWI? Clark? AG

It is a direct consequence of the assumption of mechanism, where physics emerges from all computations (in arithmetic). So you don't need to postulate a physical world to understand that all consistent computations "happen", and the Everett formulation of QM can be seen as a confirmation of the mechanist theory of mind.

A key point in both Mechanism and Everett is that although all computations "happen", they do not happen relative to each other with the same relative probabilities. That can be used to refute some "ethical criticisms" that some people raise against Everett.

Bruno

So if I put on my left shoe first today, there must be a universe in which I put on my right shoe first. And then there are the many ways I can tie my shoelaces, each resulting in more universes. This is about the most ridiculous model conceivable, don't ya think? Put another way, this is just plain dumb. AG

The fact is that all those computations are executed in all models of elementary arithmetic. To deny them, you need either to deny that 2+2=4 or to deny the Church-Turing thesis.

And quantum mechanics without collapse, or just quantum field theory, confirms this, by providing evidence that if we want the exact decimals of the probability of finding an electron at B when it starts at A, we have to sum over all paths that the electron can take to go from A to B.

You have to assume a non-mechanist theory of mind to sustain your ontological commitment to a primitively physical universe.

Bruno 

It doesn't pass the smell test. AG 


You need to make a more specific comment. The smell test is not a criterion in science, especially in counter-intuitive fundamental science. We can rely only on the facts, and the facts are that the arithmetical reality executes all computations, that a universal machine cannot distinguish a computation from a computation + an ontological commitment, and that this imposes recovering the physical laws from a statistics on all computations executed in arithmetic. To avoid this, you need to abandon the Mechanist hypothesis, which requires abandoning Descartes and Darwin, but also all current theories in physics, which all imply or use mechanism.
Mechanism passes the experimental tests well, and provides the only known non-magical theory of consciousness, qualia, and quanta.
It is not something which should replace physics, as some people want to attribute to me. But it is something which replaces physicalism in metaphysics.

You need to study my older posts, or at least read Davis' Dover book "Computability and Unsolvability" up to and including chapter 4, which shows how to "arithmetize" (represent faithfully in arithmetic) computer science and metamathematics. My work is built on this.

Bruno















Bruno Marchal

Feb 6, 2020, 8:45:37 AM2/6/20
to everyth...@googlegroups.com
On 4 Feb 2020, at 23:13, Bruce Kellett <bhkel...@gmail.com> wrote:

On Wed, Feb 5, 2020 at 12:13 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 3 Feb 2020, at 22:46, Bruce Kellett <bhkel...@gmail.com> wrote:
On Tue, Feb 4, 2020 at 2:48 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:
On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according to the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense,

No, it is exactly what Everett predicts.

If that was the case, I don’t think we would still be here discussing Everett. 

Everything that happens happens with probability one.

Everett insists, perhaps wrongly (but then that is what should be debated) that he recovers the usual quantum statistics, where the probability is given by the square of the amplitude of the wave. 

It turns out, in fact, that Everett did not prove this result. As in conventional QM, he just asserted it.


He provides arguments, which were actually already found by Paulette Destouches-Février in France 20 years before Everett, and which correspond more or less to the argument made by Graham in the papers selected by DeWitt and Graham on the MWI, and by Preskill in his textbook on quantum mechanics.
Is that argument totally convincing? Perhaps not, but let us say that I think it is improvable, and it is going in the direction that we can expect when postulating Mechanism (as do Everett, and many others, consciously or unconsciously). 





All possible outcomes occur with unit probability in any interaction/experiment. David Albert makes the very good point that in your W/M duplication scenario, for example, no first person probabilities for potential outcomes can be defined.

Where? In his book “Quantum Mechanics and Experience”? Albert has clearly not understood Everett, IMO.

Can you do this point here?

I have read a lot of Albert's more recent work, and I can't remember exactly where he makes this point. I expect it was in a Podcast discussion with Sean Carroll:


The problem is that this is nearly two hours long, and I haven't time to listen to the whole thing again. He talks in detail about probability in Everett about half way through this discussion.


Of course we agree that if a theory predicts that all outcomes come with probability one, then mechanism, but also nature, refutes that theory; but neither QM-Copenhagen nor QM-Everett claims this, unless they are talking about the 3p picture, in which there are no probabilities. What Everett calls “subjective probabilities” is equivalent, in the mechanist setting, to the first person indeterminacy. I suspect that, taken out of context, Everett might have looked like saying that all outcomes have probability one, but in his texts he made clear when he was talking about a result being accessible to an observer, and when about the universal wave as seen from “outside” (that is, the mathematical 3p solution of the wave equation).





The basic argument is that people use symmetry arguments and the like to claim that the probabilities for H-man to end up in M or W are each equal to one-half. Albert points out that the same symmetries are respected by the claim that H-man has no idea where he will end up -- he cannot assign probabilities to the separate outcomes since each occurs with probability one.

Which is refuted by all persons after the experience: both the M and W men, who rightly remember having been the H-guy, can only write the name of the one city they see after having pushed the button. So indeed the idea that all outcomes have probability one is the 3p description of the protocol of the experiment. I recall that the question is not about where the guy will be from a 3p view, but where the guy will feel himself to be, given that he survives (by mechanism) and knows that he can survive only with a first person experience of finding himself in only one city (as that is true for both copies). We can come back to this with more details if something still seems mysterious to you. It works as well with robots in place of the humans: the whole thing can be described in a purely 3p way.
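The claim that the protocol can be described in a purely 3p way can be illustrated with a toy simulation (my own hypothetical sketch, not code from the thread): iterate the W/M duplication a few times and compare what the 3p and 1p descriptions say.

```python
from itertools import product

def duplicate(histories):
    """One W/M duplication step, in pure 3p terms: every existing copy
    gets two successors, one whose diary gains 'W', one whose gains 'M'."""
    return [h + (city,) for h in histories for city in ("W", "M")]

histories = [()]      # the H-guy, before any duplication
for _ in range(3):    # three successive duplications
    histories = duplicate(histories)

# 3p view: all 2**3 = 8 diaries exist, one per history.
assert sorted(histories) == sorted(product("WM", repeat=3))

# 1p view: every diary records exactly one city per push of the button --
# no copy ever writes "W and M" for a single duplication.
assert all(len(h) == 3 and set(h) <= {"W", "M"} for h in histories)
```

The 3p description (the full list) is deterministic; the 1p indeterminacy only appears when you ask which single diary a given copy holds.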







both with Mechanism (the many-worlds interpretation of arithmetic) and with Everett (the many-worlds formulation of QM). Thinking is presumably classical, so when you take a decision, you take the same decision in all worlds, with rare exceptions.

Only if it is the same person in all those worlds.

But they are the same by definition, given that they are supposed to be the same above the substitution level by construction. Keep in mind that I am reasoning within the frame of the Mechanist hypothesis, like Darwin and Descartes, but revised by the digital Church-Turing thesis.

The trouble with all these arguments that you make is that you move away from quantum mechanics and argue in terms of your mechanism.


Everett requires Mechanism, and what Everett missed is that once you assume mechanism, not only do you need to recover the collapse of the wave in the first person discourse, you also have to recover the wave itself, and thus quantum mechanics. 



That is OK for you, but it says nothing about the actual question at issue, which is what QM predicts about these situations.


QM needs a precise theory of mind. Everett uses Mechanism, but fails to see all the consequences of that move.
Now, with Copenhagen you can invoke a non-mechanist theory of mind, but that is a bit like invoking God when we lack understanding. Today, there is simply no non-mechanist theory of mind, other than untestable fairy tales long imposed by tyrants and manipulators.

Bruno





Bruce


Different people make different (classical) decisions.

No problem. In the case above, those are not different people. They are numerically identical at the right substitution level by definition, which makes sense with the digital mechanist hypothesis.

Bruno


Bruce Kellett

Feb 6, 2020, 11:59:27 PM
to everything list
From: Bruno Marchal <mar...@ulb.ac.be>
Date: Fri, Feb 7, 2020 at 12:45 AM
Subject: Re: Postulate: Everything that CAN happen, MUST happen.
To: <everyth...@googlegroups.com>



On 4 Feb 2020, at 23:13, Bruce Kellett <bhkel...@gmail.com> wrote:

On Wed, Feb 5, 2020 at 12:13 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 3 Feb 2020, at 22:46, Bruce Kellett <bhkel...@gmail.com> wrote:
On Tue, Feb 4, 2020 at 2:48 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:
On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense,

No, it is exactly what Everett predicts.

If that was the case, I don’t think we would still be here discussing Everett. 

Everything that happens happens with probability one.

Everett insists, perhaps wrongly (but then that is what should be debated) that he recovers the usual quantum statistics, where the probability is given by the square of the amplitude of the wave. 

It turns out, in fact, that Everett did not prove this result. As in conventional QM, he just asserted it.


He provides arguments, which were actually already found by Paulette Destouches-Février in France 20 years before Everett, and which correspond more or less to the argument made by Graham in the papers selected by DeWitt and Graham on the MWI, and by Preskill in his textbook on quantum mechanics.
Is that argument totally convincing? Perhaps not, but let us say that I think it is improvable, and it is going in the direction that we can expect when postulating Mechanism (as do Everett, and many others, consciously or unconsciously).


Everett's argument is far from convincing. It is criticized by Simon Saunders in the book "Many Worlds?: Everett, Quantum Theory, & Reality", and by David Wallace in his book on "The Emergent Multiverse". Perhaps the most telling critique of Everett's idea has been given by Adrian Kent in his contribution to the book, cited above, that he edited with Simon Saunders and David Wallace. I give extensive quotations below, and attach a pdf with these comments in a more friendly format. Note that Kent's critique also undermines any idea that you can attach probabilities to outcomes in your W/M duplication scenarios in Step 3.


Born Rule in Everettian Many Worlds Theory

Everett gives an argument for the Born rule in his 1957 paper. Simon Saunders (in his introduction to the volume of essays: "Many Worlds?: Everett, Quantum Theory, & Reality", OUP 2010) gives the following summary of Everett's argument:

"But Everett was able to derive at least a fragment of the Born rule. Given that the measure over the space of branches is a function of the branch amplitudes, the question arises: What function? If the measure is to be additive, so that the measure of a sum of branches is the sum of their measures, it follows that it is the modulus square---that was something. The set of branches, complete with additive measure, then constitutes a probability space. As such, versions of the Bernoulli and other large-number theorems can be derived. They imply that the measure of all the branches exhibiting anomalous statistics (with respect to this measure) is small when the number of trials is sufficiently large, and goes to zero in the limit---that was something more."

This account can be criticized on several grounds. Firstly, it relies on the limit of infinitely many trials, whereas in practice, we only ever have a finite number of such trials. Another criticism is that there is no solid basis for the assumption that the measure should depend only on the branch weights---why should it not depend on the actual structure of the branches themselves? The other main line of objection relates to the simple application of Everett's rule in the case where all possible outcomes occur on each trial. In that case, all possible sequences of results occur, so that predictions using this rule would have been wildly contradicted by the empirical evidence---which only goes to show that the Born Rule, far from being an obvious consequence of the interpretation of the quantum state in terms of many worlds, appears quite unreasonable.
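Saunders' large-number claim, and its contrast with naive branch counting, can be illustrated numerically (a minimal sketch of my own, assuming a binary branching with Born weight p = 0.9 and calling a branch "anomalous" when its relative frequency strays more than 0.1 from p):

```python
from math import comb

def born_measure_anomalous(N, p=0.9, tol=0.1):
    """Total modulus-squared (binomial) measure of the branches whose
    relative frequency of 0-outcomes deviates from p by more than tol."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N + 1) if abs(k / N - p) > tol)

def counting_fraction_anomalous(N, p=0.9, tol=0.1):
    """The same set of branches, but weighted by simple branch counting."""
    return sum(comb(N, k) for k in range(N + 1)
               if abs(k / N - p) > tol) / 2**N

# The Born measure of the anomalous branches shrinks with N, as Saunders says...
assert born_measure_anomalous(10) > born_measure_anomalous(100) > born_measure_anomalous(1000)
assert born_measure_anomalous(1000) < 1e-6

# ...while under branch counting nearly every branch is anomalous.
assert counting_fraction_anomalous(1000) > 0.999
```

This makes the last objection in the paragraph above concrete: the "small measure" of anomalous branches is small only once the modulus-squared weights are assumed; counted branch by branch, the anomalous set dominates.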


This latter point is made very strongly by Adrian Kent in his contribution to the above cited volume of collected essays (pp. 307--354).

Kent considers a toy multiverse, which is classical, but in which branches are multiplied to record all possible results. The first such world he considers includes conscious inhabitants, and also a machine with a red button on it and a tape emerging from it, with a sequence of numbers on it, all in the range 0 to (N-1). When the red button is pressed in some universe within the multiverse, that universe is deleted, and N successor universes are then created. All the successors are in the same classical state as the original (and so, by hypothesis, all include conscious inhabitants with the same memories as those who have just been deleted), except that a new number has been written onto the end of the tape, with the number 'i' being written in the 'i'-th successor universe.

Suppose, further, that some of the inhabitants of this multiverse have acquired the theoretical idea that the laws of their multiverse might attach 'weights' to branches, i.e., a number p_i is attached to branch 'i', where p_i >= 0 and Sum_i p_i = 1. They might have various different  theories about how these weights are defined.... To be clear: this is not to say that the branches have equal weight. Nor are they necessarily physically identical, aside from the tape numbers. However, any such differences do not yield any natural quantitative definition of branch weights. There is just no fact of the matter about branch weights in this multiverse.

Kent goes on to say:

"Everettian quantum theory is essentially useless, as a scientific theory, unless it can explain the data that confirm the validity of the Copenhagen quantum theory within its domain---unless, for example, it can explain why we should expect to observe the Born rule to have been very well confirmed statistically. Evidently, Everettians cannot give an explanation that says that all observers in the multiverse will observe confirmation of the Born rule, or that very probably all observers will observe confirmation of the Born rule. On the contrary, many observers in an Everettian multiverse will definitely observe convincing 'disconfirmation' of the Born rule.

"It suffices to consider very simple many-worlds theories, containing classical branching worlds in which the branches correspond to binary outcomes of definite experiments. Consider thus the 'weightless multiverse', a many-worlds of the type outlined above, in which the machine produces only two possible outcomes, writing 0 or 1 onto the tape. Suppose now that the inhabitants begin a series of experiments in which they push the red button on the machine a large number, N, times, at regular intervals. Suppose too that the inhabitants believe (correctly) that this is a series of independent identical experiments, and moreover believe this 'dogmatically': no pattern in the data will shake their faith. Suppose also that they believe (incorrectly) that their multiverse is governed by a many-worlds theory with unknown weights attached to the 0 and 1 outcomes; identical in each trial, and seek to discover the (actually non-existent) values of these weights.

"After N trials, the multiverse contains 2^N branches, corresponding to all 2^N possible binary string outcomes. The inhabitants on a string with pN zero and (1 - p)N one outcomes will, with a degree of confidence that tends towards one as N gets large, tend to conclude that the weight 'p' is attached to zero outcome branches and weight (1 - p) is attached to one outcome branches. In other words, everyone, no matter what string they see, tends towards complete confidence in the belief that the relative frequencies they observe represent the weights.

"Let's consider further the perspective of inhabitants on a branch with 'pN' zero outcomes and '(1 - p)N' one outcomes. They do not have the delusion that all observed strings have the same relative frequency as theirs: they understand that, given the hypothesis that they live in a multiverse, 'every' binary string, and hence every relative frequency, will have been observed by someone. So how do they conclude that the theory that the weights are '(p,1 - p)' has nonetheless been confirmed? Because they have concluded that the weights measure the 'importance' of the branches for theory confirmation. Since they believe they have learned that the weights are '(p,1 - p)', they conclude that a branch with 'r' zeros and '(N - r)' ones has importance p^r(1 - p)^{N-r}. Summing over all branches with 'pN' zeros and '(1 - p)N' ones, or very close to those frequencies, thus gives a set of total importance very close to 1; the remaining branches have total importance very close to zero. So, on the set of branches that dominate the importance measure, the theory that the weights are (very close to) (p,1 - p) is indeed correct. All is well! By definition, the important branches are the ones that matter for theory confirmation. The theory is indeed confirmed!

"The problem, of course, is that this reasoning applies equally well for all the inhabitants, whatever relative frequency 'p' they see on their branch. All of them conclude that their relative frequencies represent (to very good approximation) the branching weights. All of them conclude that their own branches, together with those with identical or similar relative frequencies, are the important ones for theory confirmation. All of them thus happily conclude that their theories have been confirmed. And, recall, all of them are wrong: there are actually no branching weights."


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.
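Kent's combinatorial point can be checked by direct enumeration (a hypothetical sketch of his weightless toy model; the variable names are mine): give the inhabitants of every binary string their inferred weights (r/N, 1 - r/N), and verify that every observer, whatever string they see, finds their own frequency class to be the most "important" one.

```python
from math import comb

def importance(N, k, p_hat):
    """'Importance' that observers with inferred weights (p_hat, 1 - p_hat)
    assign to the class of branches with k zero-outcomes out of N."""
    return comb(N, k) * p_hat**k * (1 - p_hat)**(N - k)

N = 20  # after 20 button presses the multiverse has 2**20 branches
for r in range(N + 1):
    p_hat = r / N  # the weight inferred on a branch with r zeros
    scores = [importance(N, k, p_hat) for k in range(N + 1)]
    # Whatever string they see, their own class tops the importance list:
    assert max(scores) == scores[r]
# So every observer 'confirms' their theory -- although, by construction,
# this multiverse has no branch weights at all.
```

The loop passes for every r: the inferred binomial weights always make the observer's own frequency class the modal one, which is exactly the circularity Kent is pointing at.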

Bruce

Born.rule.pdf

Lawrence Crowell

Feb 7, 2020, 5:54:06 AM
to Everything List
This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure. An observer might see their branch as having sufficient randomness to be a Bayesian prior, but to derive a full theory these outlier branches with the appearance of structure have to be eliminated. This is not a devastating blow to MWI, but it is a limitation on its explanatory power. Of course with statistical physics we have these logarithms and the rest and such slop tends to be "washed out" for large enough sample space. 

No matter how hard we try it is tough to make this all epistemic, say Bayesian etc, or ontological with frequentist statistics. 

LC 

Bruce Kellett

Feb 7, 2020, 6:07:41 AM
to everyth...@googlegroups.com
I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.
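The point that all sequences carry equal counting measure can be made quantitative (a minimal sketch with hypothetical numbers: N = 200 trials and a nominal Born frequency of 0.9): under uniform branch counting, almost every sequence looks like a fair coin, and essentially none exhibit the Born statistics.

```python
from math import comb

def counting_fraction_near(N, q, tol=0.1):
    """Fraction of the 2**N equally-counted sequences whose relative
    frequency of 0-outcomes lies within tol of q."""
    return sum(comb(N, k) for k in range(N + 1)
               if abs(k / N - q) <= tol) / 2**N

N = 200
# Branch counting predicts fair-coin statistics for everyone...
assert counting_fraction_near(N, 0.5) > 0.99
# ...and flatly contradicts, say, a 90/10 Born frequency:
assert counting_fraction_near(N, 0.9) < 1e-10
```

With no measure beyond counting, "most" observers see frequencies near 1/2 regardless of the amplitudes, which is why the small-measure argument cannot get started.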

Bruce

Lawrence Crowell

Feb 7, 2020, 6:59:39 AM
to Everything List
On Friday, February 7, 2020 at 5:07:41 AM UTC-6, Bruce wrote:
On Fri, Feb 7, 2020 at 9:54 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Thursday, February 6, 2020 at 10:59:27 PM UTC-6, Bruce wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Bruce


This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure.

I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.

Bruce

This is in reference to the distribution of outcomes: all one can do is use a Boltzmannian argument with e^{-E/kT}, for 1/T a Euclideanized time. If you want to get fancy you can use Bose-Einstein or Fermi-Dirac. So this is somewhat model-dependent, but not hopeless. For multiverse connections to MWI, the energy is the energy-mass gap from the inflationary false vacuum to zero, or maybe the observable vacuum based on the CC. This is again somewhat phenomenology-dependent and a bit hand-wavy, but not hopeless. 

I don't think MWI is that much worse than other interpretations. In fact I tend to see it as better than most. 

LC

Philip Thrift

Feb 7, 2020, 10:47:35 AM
to Everything List


On Friday, February 7, 2020 at 5:59:39 AM UTC-6, Lawrence Crowell wrote:

I don't think MWI is that much worse than other interpretations. In fact I tend to see it as better than most. 

LC

 


It is sad (to me) to think that 100 years from now there will be any MWI adherents - except as some curious  cult.  

Sean Carroll promotes on his Twitter (I follow him just to see what nutty thing he says) the idea that he looks forward to the day when all physicists are Mad-Dog Everettians.

Mad-Dog Everettianism: https://arxiv.org/abs/1801.08132

It is not only a rabbit hole, it is a cult that has taken over physicists (a lot of them anyway).

@philipthrift

Lawrence Crowell

Feb 7, 2020, 12:09:31 PM
to Everything List
MWI is not that bad. All quantum interpretations have some negative qualities. I think all quantum interpretations are auxiliary postulates not provable in QM.

Stathis Papaioannou

Feb 7, 2020, 12:33:29 PM
to everyth...@googlegroups.com
Nevertheless Many Worlds is at least logically possible. What would the inhabitants expect to see, if not the world we currently see?
--
Stathis Papaioannou

Bruno Marchal

Feb 7, 2020, 1:23:37 PM
to everyth...@googlegroups.com
On 7 Feb 2020, at 05:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:

From: Bruno Marchal <mar...@ulb.ac.be>
Date: Fri, Feb 7, 2020 at 12:45 AM
Subject: Re: Postulate: Everything that CAN happen, MUST happen.
To: <everyth...@googlegroups.com>



On 4 Feb 2020, at 23:13, Bruce Kellett <bhkel...@gmail.com> wrote:

On Wed, Feb 5, 2020 at 12:13 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 3 Feb 2020, at 22:46, Bruce Kellett <bhkel...@gmail.com> wrote:
On Tue, Feb 4, 2020 at 2:48 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 2 Feb 2020, at 12:32, Alan Grayson <agrays...@gmail.com> wrote:
On Saturday, February 1, 2020 at 11:42:12 PM UTC-7, Brent wrote:
First, it's false.  You can make it true by interpreting "can happen" to mean "can happen according the prediction of quantum mechanics for this situation", but then it becomes trivial.  Second, it's not "at the heart of MWI"; the trivial version is all that MWI implies.  Read the first few paragraphs of this paper:

arXiv:quant-ph/0702121v1 13 Feb 2007

Brent

In posing the question, I want to give its advocates such as Clark the opportunity to justify the postulate. It goes way beyond the MWI and QM. E.g., it means that if someone puts on his/her right shoe first this morning, there must be a universe in which a copy of the person puts on his/her left shoe first. It seems way, way over the top, but oddly many embrace it with gusto. AG 


That is already completely different, as it seems to say that everything happens with the same probability, but that is nonsense,

No, it is exactly what Everett predicts.

If that was the case, I don’t think we would still be here discussing Everett. 

Everything that happens happens with probability one.

Everett insists, perhaps wrongly (but then that is what should be debated) that he recovers the usual quantum statistics, where the probability is given by the square of the amplitude of the wave. 

It turns out, in fact, that Everett did not prove this result. As in conventional QM, he just asserted it.


He provides arguments, which were actually already found by Paulette Destouches-Février in France 20 years before Everett, and which correspond more or less to the argument made by Graham in the papers selected by DeWitt and Graham on the MWI, and by Preskill in his textbook on quantum mechanics.
Is that argument totally convincing? Perhaps not, but let us say that I think it is improvable, and it is going in the direction that we can expect when postulating Mechanism (as do Everett, and many others, consciously or unconsciously).


Everett's argument is far from convincing. It is criticized by Simon Saunders in the book "Many Worlds?: Everett, Quantum Theory, & Reality", and by David Wallace in his book on "The Emergent Multiverse". Perhaps the most telling critique of Everett's idea has been given by Adrian Kent in his contribution to the book, cited above, that he edited with Simon Saunders and David Wallace. I give extensive quotations below, and attach a pdf with these comments in a more friendly format. Note that Kent's critique also undermines any idea that you can attach probabilities to outcomes in your W/M duplication scenarios in Step 3.


That probably explains why the papers by Kent (a guy who has been criticising Everett for a long time) have never convinced me.
To be sure, I have stopped reading his latest publications. Wallace is more convincing, but on my last reading he defended Everett in a way which still somehow presupposes an ontological physical universe (a hypothesis which I cannot use in my “mind-body” context).




Born Rule in Everettian Many Worlds Theory

Everett gives an argument for the Born rule in his 1957 paper. Simon Saunders (in his introduction to the volume of essays: "Many Worlds?: Everett, Quantum Theory, & Reality", OUP 2010) gives the following summary of Everett's argument:

"But Everett was able to derive at least a fragment of the Born rule. Given that the measure over the space of branches is a function of the branch amplitudes, the question arises: What function? If the measure is to be additive, so that the measure of a sum of branches is the sum of their measures, it follows that it is the modulus square---that was something. The set of branches, complete with additive measure, then constitutes a probability space. As such, versions of the Bernoulli and other large-number theorems can be derived. They imply that the measure of all the branches exhibiting anomalous statistics (with respect to this measure) is small when the number of trials is sufficiently large, and goes to zero in the limit---that was something more.”


So I agree with Saunders here, and his indexical view of time is also coherent with Mechanism (my working frame).



This account can be criticized on several grounds. Firstly, it relies on the limit of infinitely many trials, whereas in practice, we only ever have a finite number of such trials.


And this is of course wrong when you assume mechanism. By the invariance of the first person with respect to the number of steps needed to get the reconstitution in the Universal Dovetailing, the physical reality comes from a sort of limit on an infinity of projections from all computations to one first person indexical state. So the whole of physics is given by a limiting process.




Another criticism is that there is no solid basis for the assumption that the measure should depend only on the branch weights---why should it not depend on the actual structure of the branches themselves?

The quantum measure must depend on the relative weight, and then subsequent relative measures can depend on many things, including the intention of the observer. The idea is that the quantum probabilities are part of physics and true “everywhere”. This just consists in assuming that QM is true/correct.


The other main line of objection relates to the simple application of Everett's rule in the case where all possible outcomes occur on each trial. In that case, all possible sequences of results occur, so that predictions using this rule would have been wildly contradicted by the empirical evidence---which only goes to show that the Born Rule, far from being an obvious consequence of the interpretation of the quantum state in terms of many worlds, appears quite unreasonable.

That is what motivated Graham (and Preskill) to use the limit that you describe above. Then the Born rule follows almost from the theorem of Pythagoras, as Paulette Destouches-Février saw in Paris right at the beginning of quantum theory (her husband was a gifted student of de Broglie).
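The Pythagorean step can be spelled out (my own hedged reconstruction of this additivity argument, not a quotation from Février or Everett): suppose a branch's measure depends only on its amplitude and is additive when two orthogonal branches are coarse-grained into one. Then

```latex
\mu = f(|a|), \qquad
|a|^{2} = |a_{1}|^{2} + |a_{2}|^{2}
\;\Longrightarrow\;
f\!\left(\sqrt{|a_{1}|^{2}+|a_{2}|^{2}}\,\right) = f(|a_{1}|) + f(|a_{2}|),
```

whose continuous solutions are f(x) = c x^2; the normalisation sum_i |a_i|^2 = 1 fixes c = 1, i.e. the Born weight |a_i|^2.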





This latter point is made very strongly by Adrian Kent in his contribution to the above cited volume of collected essays (pp. 307--354).

Kent considers a toy multiverse, which is classical, but in which branches are multiplied to record all possible results. The first such world he considers includes conscious inhabitants, and also a machine with a red button on it and a tape emerging from it, with a sequence of numbers on it, all in the range 0 to (N-1). When the red button is pressed in some universe within the multiverse, that universe is deleted, and N successor universes are then created. All the successors are in the same classical state as the original (and so, by hypothesis, all include conscious inhabitants with the same memories as those who have just been deleted), except that a new number has been written onto the end of the tape, with the number 'i' being written in the 'i'-th successor universe.

Suppose, further, that some of the inhabitants of this multiverse have acquired the theoretical idea that the laws of their multiverse might attach 'weights' to branches, i.e., a number p_i is attached to branch 'i', where p_i >= 0 and Sum_i p_i = 1. They might have various different  theories about how these weights are defined.... To be clear: this is not to say that the branches have equal weight. Nor are they necessarily physically identical, aside from the tape numbers. However, any such differences do not yield any natural quantitative definition of branch weights. There is just no fact of the matter about branch weights in this multiverse.


OK. Note that I cannot assume a multiverse or anything like that. I have only 0, s0, ss0, sss0, …, and the measure will be on the computations (either the halting ones, which have a name and thus are numbers, or nameable sequences of numbers, plus some complications for the machines with oracles), but the whole is structured by the mathematics of machine self-reference. Formally this provides a quantisation, and a measure-“one” theory which obeys quantum tautologies. The toy multiverse seems to assume some brain-mind thesis, which I have explained is incoherent with Mechanism.




Kent goes on to say:

"Everettian quantum theory is essentially useless, as a scientific theory, unless it can explain the data that confirm the validity of the Copenhagen quantum theory within its domain---unless, for example, it can explain why we should expect to observe the Born rule to have been very well confirmed statistically. Evidently, Everettians cannot give an explanation that says that all observers in the multiverse will observe confirmation of the Born rule, or that very probably all observers will observe confirmation of the Born rule. On the contrary, many observers in an Everettian multiverse will definitely observe convincing 'disconfirmation' of the Born rule.

Here, let me say that Gleason's theorem reassures me a little bit. Once we get three dimensions, the formalism indicates that the measure is unique. To be sure, the quantum logic “in the head of the universal machine” is not yet developed enough to apply Gleason's theorem, but the evidence suggests that something like this is quite plausible. 
Mentioning the Copenhagen theory is not convincing to me, as it makes little sense, and no sense at all with Mechanism. Everett's axiom is not much more than the idea that observers obey QM too.




"It suffices to consider very simple many-worlds theories, containing classical branching worlds

To be sure, that never really exists. There are no worlds at all, only coherent collections of histories/computations. 


in which the branches correspond to binary outcomes of definite experiments.

Of a finite number of definite experiments, but made in an infinity of histories at once. 



Consider thus the 'weightless multiverse', a many-worlds of the type outlined above, in which the machine produces only two possible outcomes, writing 0 or 1 onto the tape. Suppose now that the inhabitants begin a series of experiments in which they push the red button on the machine a large number, N, times, at regular intervals. Suppose too that the inhabitants believe (correctly) that this is a series of independent identical experiments, and moreover believe this 'dogmatically': no pattern in the data will shake their faith. Suppose also that they believe (incorrectly) that their multiverse is governed by a many-worlds theory with unknown weights attached to the 0 and 1 outcomes; identical in each trial, and seek to discover the (actually non-existent) values of these weights.

"After N trials, the multiverse contains 2^N branches, corresponding to all 2^N possible binary string outcomes. The inhabitants on a string with pN zero and (1 - p)N one outcomes will, with a degree of confidence that tends towards one as N gets large, tend to conclude that the weight 'p' is attached to zero outcome branches and weight (1 - p) is attached to one outcome branches. In other words, everyone, no matter what string they see, tends towards complete confidence in the belief that the relative frequencies they observe represent the weights.

"Let's consider further the perspective of inhabitants on a branch with 'pN' zero outcomes and '(1 - p)N' one outcomes. They do not have the delusion that all observed strings have the same relative frequency as theirs: they understand that, given the hypothesis that they live in a multiverse, 'every' binary string, and hence every relative frequency, will have been observed by someone. So how do they conclude that the theory that the weights are '(p,1 - p)' has nonetheless been confirmed? Because they have concluded that the weights measure the 'importance' of the branches for theory confirmation. Since they believe they have learned that the weights are '(p,1 - p)', they conclude that a branch with 'r' zeros and '(N - r)' ones has importance p^r(1 - p)^{N-r}. Summing over all branches with 'pN' zeros and '(1 - p)N' ones, or very close to those frequencies, thus gives a set of total importance very close to 1; the remaining branches have total importance very close to zero. So, on the set of branches that dominate the importance measure, the theory that the weights are (very close to) (p,1 - p) is indeed correct. All is well! By definition, the important branches are the ones that matter for theory confirmation. The theory is indeed confirmed!

"The problem, of course, is that this reasoning applies equally well for all the inhabitants, whatever relative frequency 'p' they see on their branch. All of them conclude that their relative frequencies represent (to very good approximation) the branching weights. All of them conclude that their own branches, together with those with identical or similar relative frequencies, are the important ones for theory confirmation. All of them thus happily conclude that their theories have been confirmed. And, recall, all of them are wrong: there are actually no branching weights.”
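Kent's importance arithmetic in the passage above can be checked numerically; a minimal Python sketch (the values of N and p are illustrative choices, not from the paper):

```python
# Numerical check of the "importance" calculation: with hypothesised
# weights (p, 1-p), a branch with r zeros out of N trials is assigned
# importance p**r * (1-p)**(N-r).  Summing over the C(N, r) branches with
# r within a few standard deviations of p*N gives total importance
# very close to 1, as the passage claims.
from math import comb, sqrt

def importance_near_pN(N, p, k_sigma=4):
    sigma = sqrt(N * p * (1 - p))
    lo = max(0, int(p * N - k_sigma * sigma))
    hi = min(N, int(p * N + k_sigma * sigma))
    return sum(comb(N, r) * p**r * (1 - p)**(N - r) for r in range(lo, hi + 1))

print(importance_near_pN(1000, 0.3))  # very close to 1
```

The remaining branches carry the leftover importance, which is correspondingly close to zero, whatever p was hypothesised.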

I do not understand. If the multiverse is that sort of many classical worlds, with the machine giving all outputs somewhere, the correct weighting will be the one given by the Pascal binomial. That comes already with the fact that we get all 2^N strings. I might have missed something.

Do you agree that in the iterated self- (WM)-duplication, the measure is just the normal distribution?
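The binomial-to-normal claim for iterated duplication can be checked numerically; a minimal Python sketch (N = 100 duplications is an illustrative choice):

```python
# Branch counting in the iterated WM-duplication: after N duplications
# there are 2**N branches, and the number whose history contains r W's is
# C(N, r).  Uniform branch counting therefore gives the binomial
# distribution with p = 1/2, which approaches a normal distribution with
# mean N/2 and standard deviation sqrt(N)/2 for large N.
from math import comb, sqrt, pi, exp

N = 100
binom = [comb(N, r) / 2**N for r in range(N + 1)]  # fraction of branches with r W's

def normal_pdf(r, mean, sd):
    return exp(-((r - mean) ** 2) / (2 * sd * sd)) / (sd * sqrt(2 * pi))

mean, sd = N / 2, sqrt(N) / 2
# Exact branch-count fraction vs. normal approximation at the centre r = N/2:
print(binom[N // 2], normal_pdf(N // 2, mean, sd))  # both about 0.0796
```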



This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed.


They normally just get relatively rare.


In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


With Mechanism (used in Darwin) I don’t see how we can avoid the conclusion that the predictions are given by a relative (even conditional) measure on all computations. 

But in QM, once we simply reject dualism (in philosophy of mind/cognitive science), the observer has to obey QM, and we get a measure problem anyway. 

A collapse theory invokes an unknown theory of mind, which is not a problem for using QM in the FAPP mode; but to solve a deep problem like the mind-body problem, such nuances count, and the Everett many-worlds is welcome, as it looks like the many-histories interpretation, which is unavoidable for the universal number existing in the arithmetical reality.

Bruno





Bruce



Brent Meeker

Feb 7, 2020, 2:45:11 PM
to everyth...@googlegroups.com


On 2/7/2020 3:07 AM, Bruce Kellett wrote:
On Fri, Feb 7, 2020 at 9:54 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Thursday, February 6, 2020 at 10:59:27 PM UTC-6, Bruce wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Bruce


This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure.

I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.

In practice one doesn't look for a measure on specific outcome sequences because you're testing a theory that only predicts one probability.  You flip coins to test whether P(heads)=0.5, which you can confirm or refute without even knowing the sequences.  It might be that every sequence you get by flipping is in the form HTHTHTHTHTHTHT..., which would support P(H)=0.5.  It would be a different world than ours, possibly with different physics; but that would be a matter of testing a different theory.

One of the problems with MWI is that it can't seem to explain probability without sneaking in some equivalent concept. The obvious version of MWI would be branch counting, in which every measurement-like event produces an enormous number of branches and the number of branches with spin UP relative to the number with spin DOWN gives the odds of spin UP.  A metaphysical difficulty is that all the spin-UP branches are identical, and so by Leibniz's identity of indiscernibles are really only one; but maybe this is inapplicable, since the measure involves lots of environment that would make them discernible.
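The branch-counting picture in the previous paragraph can be made concrete with a toy model (illustrative Python; the 1 UP : 2 DOWN branch ratio is an assumed example, chosen to show that counting can only realise integer-ratio probabilities directly):

```python
# Toy branch counting: each measurement multiplies every existing branch,
# spawning up_copies branches recording UP and down_copies recording DOWN.
# The odds of UP are then just the fraction of UP results across all
# branches -- here 1:2 gives p(UP) = 1/3.
from itertools import product

def branch_fraction_up(n_events, up_copies=1, down_copies=2):
    outcomes = ['U'] * up_copies + ['D'] * down_copies
    total = up = 0
    for history in product(outcomes, repeat=n_events):
        total += 1
        up += history.count('U')
    return up / (total * n_events)  # fraction of UP results over all branches

print(branch_fraction_up(5))  # 1/3, the assumed branching ratio
```

Realising a probability that is not a ratio of small integers would require correspondingly many repeated branches per event, which is the difficulty the paragraph points at.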

Brent


Bruce

 
An observer might see their branch as having sufficient randomness to serve as a Bayesian prior, but to derive a full theory these outlier branches with the appearance of structure have to be eliminated. This is not a devastating blow to MWI, but it is a limitation on its explanatory power. Of course, with statistical physics we have these logarithms and the rest, and such slop tends to be "washed out" for a large enough sample space. 

No matter how hard we try, it is tough to make this all epistemic (say, Bayesian), or ontological with frequentist statistics. 

LC 

Brent Meeker

Feb 7, 2020, 3:27:25 PM
to everyth...@googlegroups.com
It's not only MWI, it's also the infinite universe where there are infinitely many copies of you and where everything happens.  And the multiverse where all possible (mathematically consistent?) universes exist.  We need a way to think about these "infinities".  Are they meaningful?  What would it mean to get rid of them and theorize that everything is finite?  Are there some intermediate options?  Where are the meta-physicists when you need them?

Brent

Bruce Kellett

Feb 7, 2020, 5:16:22 PM
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 6:45 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 3:07 AM, Bruce Kellett wrote:
On Fri, Feb 7, 2020 at 9:54 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Thursday, February 6, 2020 at 10:59:27 PM UTC-6, Bruce wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Bruce


This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure.

I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.

In practice one doesn't look for a measure on specific outcome sequences because you're testing a theory that only predicts one probability.  You flip coins to test whether P(heads)=0.5, which you can confirm or refute without even knowing the sequences.

The point of Kent's argument is that in MWI where all outcomes occur, you will get the same set of sequences of results whatever the intrinsic probabilities might be. So you cannot use data from any one sequence to test a hypothesis about the probabilities: the sequences obtained are independent of any underlying probability measure.

It might be that every sequence you get by flipping is in the form HTHTHTHTHTHTHT... which would support P(H)=0.5.  It would be a different world than ours, possibly with different physics; but that would be a matter of  testing a different theory.

One of the problems with MWI is that it can't seem to explain probability without sneaking in some equivalent concept. The obvious version of MWI would be branch counting, in which every measurement-like event produces an enormous number of branches and the number of branches with spin UP relative to the number with spin DOWN gives the odds of spin UP.  A metaphysical difficulty is that all the spin-UP branches are identical, and so by Leibniz's identity of indiscernibles are really only one; but maybe this is inapplicable, since the measure involves lots of environment that would make them discernible.

That seems to be rather beside the point.

Bruce

Philip Thrift

Feb 7, 2020, 5:34:11 PM
to Everything List
It's possible that there is only finite amount of space and matter, but a truly infinite amount of time.

Perhaps the universe (finite amount of space and matter) - running stochastically from time 0 to t_max - cycles again and again, an infinite number of times.

@philipthrift

Bruce Kellett

Feb 7, 2020, 5:36:24 PM
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 5:23 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 7 Feb 2020, at 05:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


"After N trials, the multiverse contains 2^N branches, corresponding to all 2^N possible binary string outcomes. The inhabitants on a string with pN zero and (1 - p)N one outcomes will, with a degree of confidence that tends towards one as N gets large, tend to conclude that the weight 'p' is attached to zero outcome branches and weight (1 - p) is attached to one outcome branches. In other words, everyone, no matter what string they see, tends towards complete confidence in the belief that the relative frequencies they observe represent the weights.

"Let's consider further the perspective of inhabitants on a branch with 'pN' zero outcomes and '(1 - p)N' one outcomes. They do not have the delusion that all observed strings have the same relative frequency as theirs: they understand that, given the hypothesis that they live in a multiverse, 'every' binary string, and hence every relative frequency, will have been observed by someone. So how do they conclude that the theory that the weights are '(p,1 - p)' has nonetheless been confirmed? Because they have concluded that the weights measure the 'importance' of the branches for theory confirmation. Since they believe they have learned that the weights are '(p,1 - p)', they conclude that a branch with 'r' zeros and '(N - r)' ones has importance p^r(1 - p)^{N-r}. Summing over all branches with 'pN' zeros and '(1 - p)N' ones, or very close to those frequencies, thus gives a set of total importance very close to 1; the remaining branches have total importance very close to zero. So, on the set of branches that dominate the importance measure, the theory that the weights are (very close to) (p,1 - p) is indeed correct. All is well! By definition, the important branches are the ones that matter for theory confirmation. The theory is indeed confirmed!

"The problem, of course, is that this reasoning applies equally well for all the inhabitants, whatever relative frequency 'p' they see on their branch. All of them conclude that their relative frequencies represent (to very good approximation) the branching weights. All of them conclude that their own branches, together with those with identical or similar relative frequencies, are the important ones for theory confirmation. All of them thus happily conclude that their theories have been confirmed. And, recall, all of them are wrong: there are actually no branching weights.”

I do not understand. If the multiverse is that sort of many classical worlds, with the machine giving all outputs somewhere, the correct weighting will be the one given by the Pascal binomial. That comes already with the fact that we get all 2^N strings. I might have missed something.

You certainly have. The argument that output strings that give results inconsistent with your observations have vanishing measure overall -- an argument based on the Pascal Binomial and the law of large numbers -- applies equally to all observers, whatever output string they observe. So whatever data you observe, you conclude that the theory that is consistent with that data is confirmed by the data. Which is useless, because you reach that conclusion whatever data you observe. The law of large numbers fails you when all possible outcomes are observed by someone or the other.
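The point that the generated data is independent of any hypothesised weight can be made concrete with a toy sketch (illustrative Python; the parameter p is deliberately unused, mirroring the fact that branching produces every string regardless of it):

```python
# In the branching multiverse every one of the 2**N outcome strings occurs,
# so the data set that exists is literally the same whatever weight p is
# hypothesised -- observation cannot discriminate between weight hypotheses.
from itertools import product

def multiverse_branches(N, p):
    # Every trial produces both outcomes on some branch, so the
    # hypothesised weight p plays no role in which strings exist.
    return set(product('01', repeat=N))

# Identical data sets under wildly different weight hypotheses:
print(multiverse_branches(8, p=0.1) == multiverse_branches(8, p=0.9))  # True
print(len(multiverse_branches(8, p=0.5)))  # 256 branches, one per binary string
```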

 

Do you agree that in the iterated self- (WM)-duplication, the measure is just the normal distribution?

No. As I have said before, no meaningful concept of probability can be applied in the WM-duplication case. Since no meaningful concept of probability applies when all outcomes are guaranteed to happen, no probability measure can be assigned.


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed.

They normally just get relatively rare.

It is the attempted proof of this that breaks down when all outcomes are guaranteed to occur.

In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


With Mechanism (used in Darwin) I don’t see how we can avoid the conclusion that the predictions are given by a relative (even conditional) measure on all computations.

This has nothing to do with mechanism: it is simply an observation about Everettian quantum mechanics. If you want to talk about some other theory, such as mechanism, we can do that. But I think mechanism fails at step 3 for reasons similar to those that undermine Everett.

Bruce

Bruce Kellett

Feb 7, 2020, 7:16:45 PM
to everyth...@googlegroups.com
Many-worlds might be logically possible, but it is also completely useless. If every possible outcome from any experiment/interaction actually occurs, then the total data that results is independent of any probability measure. Consequently, one cannot use data from experiments to infer anything about any underlying probabilities, even if such exist at all. In particular, Many-worlds is incompatible with the Born rule, and with the overwhelming amount of evidence confirming the Born rule in quantum mechanics. So Many-worlds (and Everett) is a failed theory, disconfirmed by every experiment ever performed. If Many-worlds is correct, then the inhabitants have no basis on which to have any expectations about what they might see.

Bruce

Lawrence Crowell

Feb 7, 2020, 7:33:24 PM
to Everything List
On Friday, February 7, 2020 at 4:36:24 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 5:23 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 7 Feb 2020, at 05:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


"After N trials, the multiverse contains 2^N branches, corresponding to all 2^N possible binary string outcomes. The inhabitants on a string with pN zero and (1 - p)N one outcomes will, with a degree of confidence that tends towards one as N gets large, tend to conclude that the weight 'p' is attached to zero outcome branches and weight (1 - p) is attached to one outcome branches. In other words, everyone, no matter what string they see, tends towards complete confidence in the belief that the relative frequencies they observe represent the weights.

"Let's consider further the perspective of inhabitants on a branch with 'pN' zero outcomes and '(1 - p)N' one outcomes. They do not have the delusion that all observed strings have the same relative frequency as theirs: they understand that, given the hypothesis that they live in a multiverse, 'every' binary string, and hence every relative frequency, will have been observed by someone. So how do they conclude that the theory that the weights are '(p,1 - p)' has nonetheless been confirmed? Because they have concluded that the weights measure the 'importance' of the branches for theory confirmation. Since they believe they have learned that the weights are '(p,1 - p)', they conclude that a branch with 'r' zeros and '(N - r)' ones has importance p^r(1 - p)^{N-r}. Summing over all branches with 'pN' zeros and '(1 - p)N' ones, or very close to those frequencies, thus gives a set of total importance very close to 1; the remaining branches have total importance very close to zero. So, on the set of branches that dominate the importance measure, the theory that the weights are (very close to) (p,1 - p) is indeed correct. All is well! By definition, the important branches are the ones that matter for theory confirmation. The theory is indeed confirmed!

"The problem, of course, is that this reasoning applies equally well for all the inhabitants, whatever relative frequency 'p' they see on their branch. All of them conclude that their relative frequencies represent (to very good approximation) the branching weights. All of them conclude that their own branches, together with those with identical or similar relative frequencies, are the important ones for theory confirmation. All of them thus happily conclude that their theories have been confirmed. And, recall, all of them are wrong: there are actually no branching weights.”

I do not understand. If the multiverse is that sort of many classical worlds, with the machine giving all outputs somewhere, the correct weighting will be the one given by the Pascal binomial. That comes already with the fact that we get all 2^N strings. I might have missed something.

You certainly have. The argument that output strings that give results inconsistent with your observations have vanishing measure overall -- an argument based on the Pascal Binomial and the law of large numbers -- applies equally to all observers, whatever output string they observe. So whatever data you observe, you conclude that the theory that is consistent with that data is confirmed by the data. Which is useless, because you reach that conclusion whatever data you observe. The law of large numbers fails you when all possible outcomes are observed by someone or the other.

 

Do you agree that in the iterated self- (WM)-duplication, the measure is just the normal distribution?

No. As I have said before, no meaningful concept of probability can be applied in the WM-duplication case. Since no meaningful concept of probability applies when all outcomes are guaranteed to happen, no probability measure can be assigned.

This is not quite as fatal as you think. Consider a simple entanglement system such as Rabi flopping. This is a high-Q cavity with a single atom and a photon. The photon is tuned to the energy gap between two atomic levels. The atom absorbs the photon and re-emits it, and at all times there is a probability, given by the square of a cosine or sine function that varies with time. As this system dithers away, there is then some time where the wave function, from the perspective of the outside, collapses. In the MWI the observer is in some ways "frame dragged" along a certain quantum amplitude that, from an observational perspective, is now unity. It actually matters little to the observer what the probability was for finding either an excited atom plus no photon, or the photon and the atom in the ground state. When it comes to probabilities, those are important in QM when the experiment has the system in a superposed or entangled state. 

In MWI, while the observer has measured a certain outcome, the system is now an entanglement of two worlds that keeps the Rabi oscillation going. It does not particularly make any difference what the probability for the measured outcome was just prior to the measurement. Also, in the post-measurement period, while the system may in a global context keep flopping, since the probabilities for the outcome in both of the worlds are unity, this has no observable consequence. Where things do get a bit strange with this is in the infinitesimal time period where the probability for either outcome is in the limit zero, and in the MWI perspective this is still a "world path."
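The Rabi oscillation described above can be sketched numerically (illustrative Python; the Rabi frequency is an assumed parameter, set here to give one full flop per unit time):

```python
# Resonant Rabi flopping: the excited-state probability oscillates as
# cos^2(omega * t / 2), passing periodically through 0 and 1 -- the points
# where one "world path" momentarily has vanishing amplitude.
from math import cos, pi

def p_excited(t, omega=2 * pi):
    """Probability the atom is excited (photon absorbed) at time t."""
    return cos(omega * t / 2) ** 2

print(round(p_excited(0.0), 6))   # 1.0: atom certainly excited
print(round(p_excited(0.5), 6))   # 0.0: photon certainly re-emitted
print(round(p_excited(0.25), 6))  # 0.5: equal-amplitude superposition
```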

LC

Lawrence Crowell

Feb 7, 2020, 7:51:44 PM
to Everything List
On Friday, February 7, 2020 at 6:16:45 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 7 Feb 2020 at 15:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


Carroll and Sebens wrote a paper a year ago illustrating how MWI is consistent with the Born rule. They did have to restrict paths or states that were too far removed from being a good Bayesian prior, so it is a bit loose. However, it was not bad.

The inability to assign a clear probability to a particular world path is argued to be one reason that MWI is the best interpretation for working on quantum gravitation. This is a sort of nonlocality. I am not sure this clinches MWI as the clearly superior interpretation. Much the same nonlocality can be identified with quantum spacetime if it is built up from quantum entanglements, thus avoiding the use of an interpretation.

MWI is sworn to by a number of physicists, though Copenhagen still holds its own and QBism is gaining adherents. QBism actually has a few things going for it. I frankly see all of these as ancillary postulates that have limited usefulness, mostly in exposition.

LC

Bruce Kellett

Feb 7, 2020, 8:10:54 PM
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 11:51 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Friday, February 7, 2020 at 6:16:45 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 7 Feb 2020 at 15:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


Carroll and Sebens wrote a paper a year ago illustrating how MWI is consistent with the Born rule. They did have to restrict paths or states that were too far removed from being a good Bayesian prior, so it is a bit loose. However, it was not bad.

Not bad!!!! I suppose if you feel justified in just throwing away anything that does not suit your favourite theory, then you can get away with anything.  It is the fact that these 'worlds' that are far removed from what one wants to see cannot just be "thrown away" that destroys MWI. Given that the probability of particular outcomes no longer has meaning when all outcomes necessarily occur, one cannot use any observed data to justify any theory about the probabilities. All theories are just as good, or just as bad. Consequently, assuming probabilities for particular outcomes no longer makes any sense.


The inability to define a clear probability to a particular world path is argued to be one reason that MWI is the best interpretation to work quantum gravitation. This is a sort of nonlocality. I am not sure this clinches MWI as the clearly superior interpretation. Much the same nonlocality can be identified with quantum spacetime if it is built up from quantum entanglements, thus avoiding the use of an interpretation.

I doubt that anything along these lines is going to resolve the basic problem.

MWI is sworn to by a number of physicists, though Copenhagen still holds its own and QBism is gaining adherents. QBism actually has a few things going for it. I frankly see all of these as ancillary postulates that have limited usefulness, mostly in exposition.

Perhaps some interpretations make more sense than others. It seems, from the considerations that I have raised, that, despite what many physicists say about MWI, it is a failure as an interpretation of QM -- it does not allow one to use experimental data to evaluate the theory one way or the other. As Kent says, "Everettian quantum theory is essentially useless, as a scientific theory, unless it can explain the data that confirms the validity of standard quantum mechanics." And Everett cannot do this.

Bruce

Brent Meeker

Feb 7, 2020, 8:23:04 PM
to everyth...@googlegroups.com


On 2/7/2020 2:16 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 6:45 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 3:07 AM, Bruce Kellett wrote:
On Fri, Feb 7, 2020 at 9:54 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Thursday, February 6, 2020 at 10:59:27 PM UTC-6, Bruce wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Bruce


This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure.

I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.

In practice one doesn't look for a measure on specific outcome sequences, because you're testing a theory that only predicts one probability.  You flip coins to test whether P(heads)=0.5, which you can confirm or refute without even knowing the sequences.

The point of Kent's argument is that in MWI where all outcomes occur, you will get the same set of sequences of results whatever the intrinsic probabilities might be. So you cannot use data from any one sequence to test a hypothesis about the probabilities: the sequences obtained are independent of any underlying probability measure.

Why not?  Most copies of me will see sequences with approximately equal numbers of H and T.  In fact we do use data from one sequence, whichever one our accelerator produces, even though the theory we're testing predicts that all sequences are possible.  But we don't compare sequences; we compare statistics on the sequences and compare those to predicted probabilities.

Whether sequences are independent of "underlying probabilities" is a different problem.  First, one can't legitimately assume underlying probabilities when trying to justify the existence of a probability measure.  Second, the simple way to postulate a measure is just counting branches, which means that there must be many repetitions of the same sequence on different branches in order to realize probability values that aren't integer ratios.

Brent


It might be that every sequence you get by flipping is in the form HTHTHTHTHTHTHT... which would support P(H)=0.5.  It would be a different world than ours, possibly with different physics; but that would be a matter of  testing a different theory.

One of the problems with MWI is that it can't seem to explain probability without sneaking in some equivalent concept. The obvious version of MWI would be branch counting, in which every measurement-like event produces an enormous number of branches, and the number of branches with spin UP relative to the number with spin DOWN gives the odds of spin UP.  A metaphysical difficulty is that all the spin UP branches are identical and so, by Leibniz's identity of indiscernibles, are really only one; but maybe this is inapplicable, since the measurement involves lots of environment that would make them discernible.
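A minimal sketch in Python of the naive branch-counting picture described above, with an illustrative 3:1 weighting that is my own choice, not anything derived from quantum mechanics:

```python
# Naive branch counting: encode odds P(UP) = 3/4 by splitting every
# branch into 3 UP-copies and 1 DOWN-copy at each measurement event.
# (The 3:1 ratio is an illustrative assumption, not derived from QM.)
UP_COPIES, DOWN_COPIES = 3, 1

def measure(branches):
    """One measurement-like event: each branch splits into copies."""
    out = []
    for history in branches:
        out += [history + "U"] * UP_COPIES
        out += [history + "D"] * DOWN_COPIES
    return out

branches = [""]
for _ in range(4):                 # four successive measurements
    branches = measure(branches)

total = len(branches)              # (3 + 1)**4 = 256 branches
frac_up = sum(h.count("U") for h in branches) / (4 * total)
print(total, frac_up)              # branch ratios reproduce the odds: 0.75
```

Note that the history "UUUU" appears 3**4 = 81 times verbatim, which is exactly the identity-of-indiscernibles worry: the scheme only works if something (e.g. the environment) makes those copies distinct.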

That seems to be rather beside the point.

Bruce
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Lawrence Crowell

unread,
Feb 7, 2020, 8:28:24 PM2/7/20
to Everything List
On Friday, February 7, 2020 at 7:10:54 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 11:51 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Friday, February 7, 2020 at 6:16:45 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 7 Feb 2020 at 15:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


Carroll and Sebens wrote a paper a year ago illustrating how MWI is consistent with the Born rule. They did have to restrict paths or states that were too far removed from being a good Bayesian prior, so it is a bit loose. However, it was not bad.

Not bad!!!! I suppose if you feel justified in just throwing away anything that does not suit your favourite theory, then you can get away with anything.  It is the fact that these 'worlds' that are far removed from what one wants to see cannot just be "thrown away" that destroys MWI. Given that the probability of particular outcomes no longer has meaning when all outcomes necessarily occur, one cannot use any observed data to justify any theory about the probabilities. All theories are just as good, or just as bad. Consequently, assuming probabilities for particular outcomes no longer makes any sense.


The set of amplitudes or paths thrown away is a small measure. The bounds are not entirely certain, but they are comparatively small.
 

The inability to define a clear probability to a particular world path is argued to be one reason that MWI is the best interpretation to work quantum gravitation. This is a sort of nonlocality. I am not sure this clinches MWI as the clearly superior interpretation. Much the same nonlocality can be identified with quantum spacetime if it is built up from quantum entanglements, thus avoiding the use of an interpretation.

I doubt that anything along these lines is going to resolve the basic problem.

A number of physicists swear by MWI, though Copenhagen still holds its own and QBism is gaining adherents. QBism actually has a few things going for it as well. I frankly see all of these as ancillary postulates that have limited usefulness, mostly in exposition.

Perhaps some interpretations make more sense than others. It seems, from the considerations that I have raised, that, despite what many physicists say about MWI, it is a failure as an interpretation of QM -- it does not allow one to use experimental data to evaluate the theory one way or the other. As Kent says, "Everettian quantum theory is essentially useless, as a scientific theory, unless it can explain the data that confirms the validity of standard quantum mechanics." And Everett cannot do this.

Bruce

The operative word is theory, and I do not see quantum interpretations as theories. They are more, in a sense, metaphysics used to provide some explanatory means to make QM more understandable to our classical brains.

LC

Brent Meeker

unread,
Feb 7, 2020, 8:48:10 PM2/7/20
to everyth...@googlegroups.com


On 2/7/2020 2:36 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 5:23 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
On 7 Feb 2020, at 05:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


"After N trials, the multiverse contains 2^N branches, corresponding to all 2^N possible binary string outcomes. The inhabitants on a string with pN zero and (1 - p)N one outcomes will, with a degree of confidence that tends towards one as N gets large, tend to conclude that the weight 'p' is attached to zero outcome branches and weight (1 - p) is attached to one outcome branches. In other words, everyone, no matter what string they see, tends towards complete confidence in the belief that the relative frequencies they observe represent the weights.

"Let's consider further the perspective of inhabitants on a branch with 'pN' zero outcomes and '(1 - p)N' one outcomes. They do not have the delusion that all observed strings have the same relative frequency as theirs: they understand that, given the hypothesis that they live in a multiverse, 'every' binary string, and hence every relative frequency, will have been observed by someone. So how do they conclude that the theory that the weights are '(p,1 - p)' has nonetheless been confirmed?. Because they have concluded that the weights measure the 'importance' of the branches for theory confimation. Since they believe they have learned that the weights are '(p,1 - p)', they conclude that a branch with 'r' zeros and '(N - r)' ones has importance p^r(1 - p)^{N-r}. Summing over all branches with 'pN' zeros and '(1 - p)N' ones, or very close to those frequencies, thus gives a set of total importance very close to 1; the remaining branches have total importance very close to zero. So, on the set of branches that dominate the importance measure, the theory that the weights are (very close to) (p,1 - p) is indeed correct. All is well! By definition, the important branches are the ones that matter for theory confimation. The theory is inded confirmed!

"The problem, of course, is that this reasoning applies equally well for all the inhabitants, whatever relative frequency 'p' they see on their branch. All of them conclude that their relative frequencies represent (to very good approximation) the branching weights. All of them conclude that their own branches, together with those with identical or similar relative frequencies, are the important ones for theory confirmation. All of them thus happily conclude that their theories have been confirmed. And, recall, all of them are wrong: there are actually no branching weights.”

I do not understand. If the multiverse is that sort of many classical worlds, with the machine giving all outputs somewhere, the correct weighting will be the one given by the Pascal binomial. That comes already with the fact that we get all 2^N strings. I might have missed something.

You certainly have. The argument that output strings that give results inconsistent with your observations have vanishing measure overall -- an argument based on the Pascal Binomial and the law of large numbers -- applies equally to all observers, whatever output string they observe. So whatever data you observe, you conclude that the theory that is consistent with that data is confirmed by the data. Which is useless, because you reach that conclusion whatever data you observe. The law of large numbers fails you when all possible outcomes are observed by someone or the other.

So if the experiment is to toss a coin six times, there will be a branch of the MW where HTHHTHHHHH is observed and this will confirm the theory that H's are four times as probable as T's.  But there will be many more branches where it is found that P(H)=P(T) (252 vs 45).  And in the limit of large experiments almost all experimenters (in the MW) will find P(H)~P(T).  Hence almost all experimenters will conclude something close to the presumed true value.

This however depends on the assumption that each sequence of H and T occurs in one branch of the MW.  Other probability values, like  1/pi, are going to require very large numbers of branches to approximate.
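The counts quoted above can be checked directly (the sequence HTHHTHHHHH has ten tosses, with eight H and two T): there are C(10,2) = 45 sequences with eight heads and C(10,5) = 252 with five heads. A short Python check, which also shows the fraction of branches within ±5% of an even split growing toward 1 as the experiment lengthens (the toss counts are my choices):

```python
from math import comb

n = 10                       # length of HTHHTHHHHH: ten tosses
print(comb(n, 2))            # sequences with 8 H and 2 T -> 45
print(comb(n, n // 2))       # sequences with 5 H and 5 T -> 252

# Fraction of all 2**n branches whose head-count lies within
# +/- 5% of an even split, for increasing experiment lengths:
for n in (10, 100, 1000):
    near = sum(comb(n, r) for r in range(int(0.45 * n), int(0.55 * n) + 1))
    print(n, near / 2**n)    # tends toward 1 as n grows
```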

Brent


 

Do you agree that in the iterated self- (WM)-duplication, the measure is just the normal distribution?

No. As I have said before, no meaningful concept of probability can be applied in the WM-duplication case. Since no meaningful concept of probability applies when all outcomes are guaranteed to happen, no probability measure can be assigned.


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed.

They normally just get relatively rare.

It is the attempted proof of this that breaks down when all outcomes are guaranteed to occur.

In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


With Mechanism (used in Darwin) I don't see how we can evade the conclusion that the predictions are given by a relative (even conditional) measure on all computations.

This has nothing to do with mechanism: it is simply an observation about Everettian quantum mechanics. If you want to talk about some other theory, such as mechanism, we can do that. But I think mechanism fails at step 3 for reasons similar to those that undermine Everett.

Bruce

Bruce Kellett

unread,
Feb 7, 2020, 8:54:11 PM2/7/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 12:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 2:16 PM, Bruce Kellett wrote:

The point of Kent's argument is that in MWI where all outcomes occur, you will get the same set of sequences of results whatever the intrinsic probabilities might be. So you cannot use data from any one sequence to test a hypothesis about the probabilities: the sequences obtained are independent of any underlying probability measure.

Why not?  Most copies of me will see sequences with approximately equal numbers of H and T.

You are making the mistake that many commentators make: you are thinking of the distribution over the set of all possible sequences, and then assuming that we sample at random from this set. But that is not how experiments are done. We run the experiment N times and obtain some sequence of results. We then use the data so obtained to compare with our theory. There is no random selection from the set of all possible sequences. In fact, in MWI, there is one observer for every possible sequence, and we have to consider what each of them, in isolation, will conclude. Many will see the Born rule disconfirmed.
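This point can be made concrete with a small enumeration (Python; N = 12 is my choice so that all branches fit in memory): if every length-N sequence occurs on exactly one branch, the branch set is literally the same whatever the underlying p is, and counting branches, most observers see a head-frequency near 0.5 regardless:

```python
from itertools import product
from collections import Counter

N = 12   # small enough to enumerate all 2**N branches exactly

# In a one-branch-per-sequence multiverse, the set of branches does
# not depend on any underlying probability p at all:
branches = list(product("HT", repeat=N))

# Distribution of observed head-counts across branches, counting each
# branch once (no Born weights anywhere in the construction):
freqs = Counter(seq.count("H") for seq in branches)
mode = max(freqs, key=freqs.get)
print(len(branches), mode / N)   # 4096 branches; modal frequency 0.5
```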


 
  In fact we do use data from one sequence, whichever one our accelerator produces, even though the theory we're testing predicts that all sequences are possible.  But we don't compare sequences; we compare statistics on the sequences and compare those to predicted probabilities.

That is just a fantasy made up to get out of a difficulty. That is not how science proceeds.

Of course, if many-worlds is correct and every possible outcome occurs for every trial, then given the probability deduced from one set of N trials, we can always attempt to confirm this result by doing another set of trials. The problem is that the second set of trials is quite likely to give a different result from the first. That also would count as a disconfirmation of the theory.

Whether sequences are independent of "underlying probabilities" is a different problem.  First, one can't legitimately assume underlying probabilities when trying to justify the existence of a probability measure.

In the first instance, we are not trying to justify the existence of a probability measure. We are trying to see if experimental data can confirm a particular theory.

 
Second, the simple way to postulate a measure is just counting branches, which means that there must be many repetitions of the same sequence on different branches in order to realize probability values that aren't integer ratios


Branch counting has a bad reputation as a basis for a probability measure. One problem, as Wallace for instance points out, is that the number of branches is never well-defined, so no clear count is available. There are other problems, which have led to the abandonment of this approach to probability.

Bruce

Bruce Kellett

unread,
Feb 7, 2020, 8:57:24 PM2/7/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 12:28 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Friday, February 7, 2020 at 7:10:54 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 11:51 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Friday, February 7, 2020 at 6:16:45 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 7 Feb 2020 at 15:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.


Carroll and Sebens wrote a paper a year ago illustrating how MWI is consistent with the Born rule. They did have to restrict paths or states that were too far removed from being a good Bayesian prior, so it is a bit loose. However, it was not bad.

Not bad!!!! I suppose if you feel justified in just throwing away anything that does not suit your favourite theory, then you can get away with anything.  It is the fact that these 'worlds' that are far removed from what one wants to see cannot just be "thrown away" that destroys MWI. Given that the probability of particular outcomes no longer has meaning when all outcomes necessarily occur, one cannot use any observed data to justify any theory about the probabilities. All theories are just as good, or just as bad. Consequently, assuming probabilities for particular outcomes no longer makes any sense.


The set of amplitudes or paths thrown away is a small measure. The bounds are not entirely certain, but they are comparatively small.


The problem is to justify that the paths thrown away do, in fact, have small measure. The proof given by Kent shows that, whatever result you obtain, you can argue that contrary results have "small measure", and can be thrown away. There is nothing that picks out one particular set of paths as preferred in the many-worlds situation. One can only get that in a stochastic one-world model.

Bruce

Bruce Kellett

unread,
Feb 7, 2020, 9:04:31 PM2/7/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 12:48 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 2:36 PM, Bruce Kellett wrote:

You certainly have. The argument that output strings that give results inconsistent with your observations have vanishing measure overall -- an argument based on the Pascal Binomial and the law of large numbers -- applies equally to all observers, whatever output string they observe. So whatever data you observe, you conclude that the theory that is consistent with that data is confirmed by the data. Which is useless, because you reach that conclusion whatever data you observe. The law of large numbers fails you when all possible outcomes are observed by someone or the other.

So if the experiment is to toss a coin six times, there will be a branch of the MW where HTHHTHHHHH is observed

If you observe that result on six tosses, then something is seriously wrong :-).

and this will confirm the theory that H's are four times as probable as T's.  But there will be many more branches where it is found that P(H)=P(T) (252 vs 45).  And in the limit of large experiments almost all experimenters (in the MW) will find P(H)~P(T).  Hence almost all experimenters will conclude something close to the presumed true value.

But experiments are not conducted by polling all possible observers. One cannot communicate with those on other branches, so this is just silly. The experimenter has only his own data to work with, and he must make whatever deductions he can using only that data.

This however depends on the assumption that each sequence of H and T occurs in one branch of the MW.  Other probability values, like  1/pi, are going to require very large numbers of branches to approximate.

Irrelevant.

Bruce

Brent Meeker

unread,
Feb 7, 2020, 9:17:58 PM2/7/20
to everyth...@googlegroups.com


On 2/7/2020 5:53 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 12:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 2:16 PM, Bruce Kellett wrote:

The point of Kent's argument is that in MWI where all outcomes occur, you will get the same set of sequences of results whatever the intrinsic probabilities might be. So you cannot use data from any one sequence to test a hypothesis about the probabilities: the sequences obtained are independent of any underlying probability measure.

Why not?  Most copies of me will see sequences with approximately equal numbers of H and T.

You are making the mistake that many commentators make: you are thinking of the distribution over the set of all possible sequences, and then assuming that we sample at random from this set. But that is not how experiments are done. We run the experiment N times and obtain some sequence of results. We then use the data so obtained to compare with our theory. There is no random selection from the set of all possible sequences. In fact, in MWI, there is one observer for every possible sequence, and we have to consider what each of them, in isolation, will conclude. Many will see the Born rule disconfirmed.

But in the limit of large N, those who see the rule disconfirmed will be few in number.




 
  In fact we do use data from one sequence, which ever one our accelerator produces, even though the theory we're testing predicts that all sequences are possible.  But we don't compare sequences; we compare statistics on the sequences and compare those to predicted probabilities.

That is just a fantasy made up to get out of a difficulty. That is not how science proceeds.

I beg to differ.  Who compares sequences of double photon production at the LHC?  The data I see is always derived statistics.



Of course, if many-worlds is correct and every possible outcome occurs for every trial, then given the probability deduced from one set of N trials, we can always attempt to confirm this result by doing another set of trials. The problem is that the second set of trials is quite like to give a different result from the first. That also would count as a disconfirmation of the theory.

How is that different from a single world?  Sequences in probabilistic experiments give different results.  We don't count that as disconfirmation because we look at the statistics and say, "Oh, that doesn't agree with the Kellett experiment, but it's well within the confidence bounds, so they both confirm the theory."



Whether sequences are independent of "underlying probabilities" is a different problem.  First, one can't legitimately assume underlying probabilities when trying to justify the existence of a probability measure.

In the first instance, we are not trying to justify the existence of a probability measure. We are trying to see if experimental data can confirm a particular theory.

 
Second, the simple way to postulate a measure is just counting branches, which means that there must be many repetitions of the same sequence on different branches in order to realize probability values that aren't integer ratios.


Branch counting has a bad reputation as a basis for a probability measure. One problem, as Wallace for instance points out, is that the number of branches is never well-defined, so no clear count is available.

Right.  The number has to be essentially infinite in order that irrational probabilities can be represented.  But it can't be actually countably infinite because then that creates the problem of defining a measure over infinitely many integers.  So it seems it must be bigger than any number ever measured, but not infinite.
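The 1/π point can be quantified with a sketch (Python; the denominator caps are my choices): realizing a probability as a ratio of branch counts needs a denominator q with p ≈ a/q, and for an irrational value like 1/π the required branch count grows roughly tenfold per extra digit of accuracy, with no finite count ever exact:

```python
from fractions import Fraction
import math

p = 1 / math.pi   # a probability no finite branch ratio hits exactly

for max_den in (10, 10**3, 10**6, 10**9):
    approx = Fraction(p).limit_denominator(max_den)
    err = abs(p - approx)
    print(max_den, approx, f"{float(err):.2e}")
```

So a branch-counting measure needs on the order of q branches per outcome to approximate p to roughly 1/q² accuracy, and strictly infinitely many for exactness, which is the dilemma described above.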


There are other problems, which have led to the abandonment of this approach to probability.

I'm not aware of any other problems...aside from the mere extravagance and lack of function of MWI.

Brent


Bruce

Brent Meeker

unread,
Feb 7, 2020, 9:26:50 PM2/7/20
to everyth...@googlegroups.com
But that is answering the inverse problem.  It's showing that there are experimenters who will verify wrong theories and would throw away many values in the MWI from the God's eye view.  But they don't know of those values.  The point is that the experimenters who do this and accept the wrong theory are small in number.  So we may rationally expect to be among those who are right.


There is nothing that picks out one particular set of paths as preferred in the many-worlds situation.

Sure you can.  For example you can pick out the set of paths whose statistics are within some bounds of the mean.


One can only get that in a stochastic one-world model.

All paths occur in a stochastic one-world model too.  The only difference is that some probability measure is assumed as part of the model.

Brent

Brent Meeker

unread,
Feb 7, 2020, 9:30:36 PM2/7/20
to everyth...@googlegroups.com


On 2/7/2020 6:04 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 12:48 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 2:36 PM, Bruce Kellett wrote:

You certainly have. The argument that output strings that give results inconsistent with your observations have vanishing measure overall -- an argument based on the Pascal Binomial and the law of large numbers -- applies equally to all observers, whatever output string they observe. So whatever data you observe, you conclude that the theory that is consistent with that data is confirmed by the data. Which is useless, because you reach that conclusion whatever data you observe. The law of large numbers fails you when all possible outcomes are observed by someone or the other.

So if the experiment is to toss a coin six times, there will be a branch of the MW where HTHHTHHHHH is observed

If you observe that result on six tosses, then something is seriously wrong :-).

and this will confirm the theory that H's are four times as probable as T's.  But there will be many more branches where it is found that P(H)=P(T) (252 vs 45).  And in the limit of large experiments almost all experimenters (in the MW) will find P(H)~P(T).  Hence almost all experimenters will conclude something close to the presumed true value.

But experiments are not conducted by polling all possible observers. One cannot communicate with those on other branches, so this is just silly. The experimenter has only his own data to work with, and he must make whatever deductions he can using only that data.

Right.  So he may, depending on the branch, infer the wrong conclusion.  But such observers are few in number in the limit of many and longer experiments.  So we too make our deductions based on the data we observe.  And the above analysis gives us reason to think we will be among those getting the data that supports the right theory.

Brent


This however depends on the assumption that each sequence of H and T occurs in one branch of the MW.  Other probability values, like  1/pi, are going to require very large numbers of branches to approximate.

Irrelevant.

Bruce

Bruce Kellett

unread,
Feb 7, 2020, 11:08:23 PM2/7/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 1:30 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 6:04 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 12:48 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 2:36 PM, Bruce Kellett wrote:

You certainly have. The argument that output strings that give results inconsistent with your observations have vanishing measure overall -- an argument based on the Pascal Binomial and the law of large numbers -- applies equally to all observers, whatever output string they observe. So whatever data you observe, you conclude that the theory that is consistent with that data is confirmed by the data. Which is useless, because you reach that conclusion whatever data you observe. The law of large numbers fails you when all possible outcomes are observed by someone or the other.

So if the experiment is to toss a coin six times, there will be a branch of the MW where HTHHTHHHHH is observed

If you observe that result on six tosses, then something is seriously wrong :-).

and this will confirm the theory that H's are four times as probable as T's.  But there will be many more branches where it is found that P(H)=P(T) (252 vs 45).  And in the limit of large experiments almost all experimenters (in the MW) will find P(H)~P(T).  Hence almost all experimenters will conclude something close to the presumed true value.

But experiments are not conducted by polling all possible observers. One cannot communicate with those on other branches, so this is just silly. The experimenter has only his own data to work with, and he must make whatever deductions he can using only that data.

Right.  So he may, depending on the branch, infer the wrong conclusion.  But such observers are few in number in the limit of many and longer experiments.  So we too make our deductions based on the data we observe.  And the above analysis gives us reason to think we will be among those getting the data that supports the right theory.

The result at the heart of this is that no matter what set of results you get from your series of trials you will think that the number who get contrary results is small. The laws of large numbers that you are relying on do not apply only to some 'preferred' set of results that conform to the 'correct' theory. Everyone in the Many-worlds will think that they are among those getting data that supports the right theory. Did you not follow Kent's argument?

If all results occur on every trial, the same overall sequences of results from a large number of trials will be obtained, whatever the probabilities in the underlying theory. So one cannot conclude anything about the probabilities from the observed data -- the probabilities have no effect on the data. One can get probabilities only by restricting one's attention to a subset of trials, and there is no principled way to do this in Many-worlds.

Bruce

Bruce Kellett

unread,
Feb 7, 2020, 11:14:33 PM2/7/20
to everyth...@googlegroups.com
Assuming you know what the 'mean' is absent any experiment. Otherwise you are just cherry picking data to support your arbitrary theory.
One can only get that in a stochastic one-world model.

All paths occur in a stochastic one-world model too.

No they don't. They are possible, perhaps, but they do not necessarily occur.

  The only difference is that some probability measure is assumed as part of the model.

And this gives one a principled reason for ignoring the paths that are not observed. Low probability has an independent meaning in the one-world case, so one is unlikely to observe a low probability set of results. Not impossible, but of low probability, where that means 'unlikely'. No comparable concept of probability is available in Many-worlds.

Bruce

Stathis Papaioannou

unread,
Feb 7, 2020, 11:15:02 PM2/7/20
to everyth...@googlegroups.com
So are you suggesting that the inhabitants would just see chaos?
--
Stathis Papaioannou

Bruce Kellett

unread,
Feb 7, 2020, 11:19:58 PM2/7/20
to everyth...@googlegroups.com
No, I am suggesting that Many-worlds is a failed theory, unable to account for everyday experience. A stochastic single-world theory is perfectly able to account for what we see.

Bruce

smitra

unread,
Feb 8, 2020, 12:21:51 AM2/8/20
to everyth...@googlegroups.com
Stochastic single-world theories make predictions that violate those of
quantum mechanics. If the MWI (in the general sense of there existing a
multiverse rather than any details of how to derive the Born rule) is
not correct, then that's hard to reconcile with known experimental
results. New physics that so far has never been observed needs to be
assumed just to get rid of the Many Worlds. Also, this new physics
should appear not at the as of yet unprobed high energies where the
known laws of physics could plausibly break down, instead it would have
to appear at the mesoscopic or macroscopic scale where the laws of
physics are essentially fixed.

Saibal

Brent Meeker

unread,
Feb 8, 2020, 12:41:12 AM2/8/20
to everyth...@googlegroups.com


On 2/7/2020 8:14 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 1:26 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 5:57 PM, Bruce Kellett wrote:

There is nothing that picks out one particular set of paths as preferred in the many-worlds situation.

Sure you can.  For example you can pick out the set of paths whose statistics are within some bounds of the mean.

Assuming you know what the 'mean' is absent any experiment.

The mean is estimated by the average of the experimental values.


Otherwise you are just cherry picking data to support your arbitrary theory.
One can only get that in a stochastic one-world model.

All paths occur in a stochastic one-world model too.

No they don't. They are possible, perhaps, but they do not necessarily occur.

They don't necessarily occur.  But they probabilistic occur.  Otherwise it wouldn't be a stochastic model.  So it seems that all you objections to MWI apply equally.



  The only difference is that some probability measure is assumed as part of the model.

And this gives one a principled reason for ignoring the paths that are not observed.

Why not ignore them because they are not observed?  That's a principled reason.


Low probability has an independent meaning in the one-world case, so one is unlikely to observe a low probability set of results.

One is unlikely to observe a result that is realized in only a small fraction of the MW branches.  I agree that MWI fails to derive the Born rule.  But I don't agree that it is inconsistent with it, given the version of MWI that postulates many branches...not just one per possible outcome.

Brent

Not impossible, but of low probability, where that means 'unlikely'. No comparable concept of probability is available in Many-worlds.

Bruce
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Bruce Kellett

unread,
Feb 8, 2020, 12:54:24 AM2/8/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 4:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 8:14 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 1:26 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 5:57 PM, Bruce Kellett wrote:

There is nothing that picks out one particular set of paths as preferred in the many-worlds situation.

Sure you can.  For example you can pick out the set of paths whose statistics are within some bounds of the mean.

Assuming you know what the 'mean' is absent any experiment.

The mean is estimated by the average of the experimental values.


In other words, you use the data to infer probabilities. But the same data occur whatever the probabilities, so your backward inference to the probabilities is meaningless.
Otherwise you are just cherry picking data to support your arbitrary theory.
One can only get that in a stochastic one-world model.

All paths occur in a stochastic one-world model too.

No they don't. They are possible, perhaps, but they do not necessarily occur.

They don't necessarily occur.  But they probabilistically occur.

What on earth does that mean?

If the probability is very low, then the improbable sequences of results need not occur even if you repeat the experiment 'till the heat death of the universe. In MWI the low weight sequences necessarily occur in every run of the experiments. Do you not see the difference?

  Otherwise it wouldn't be a stochastic model.  So it seems that all you objections to MWI apply equally.


Get a grip, Brent.

  The only difference is that some probability measure is assumed as part of the model.

And this gives one a principled reason for ignoring the paths that are not observed.

Why not ignore them because they are not observed?  That's a principled reason.

That is a one-world theory. And I agree that that is the way to go.

Low probability has an independent meaning in the one-world case, so one is unlikely to observe a low probability set of results.

One is unlikely to observe a result that is realized in only a small fraction of the MW branches.

Why? One does not choose one's results at random from the set of all possible results. In MWI there is always an observer who gets every possible set of results. Why ignore those unfortunates who get results inconsistent with your pet theory?

  I agree that MWI fails to derive the Born rule.  But I don't agree that it is inconsistent with it, given the version of MWI that postulates many branches...not just one per possible outcome.

The point is that MWI is inconsistent with experience. There will always be observers who get results inconsistent with the Born rule. And we cannot ensure that we are not such observers. So how can we claim that our theory is confirmed by the data? The data are consistent with all possible theories -- or none at all.

Bruce

Bruce Kellett

unread,
Feb 8, 2020, 1:00:59 AM2/8/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
On 08-02-2020 05:19, Bruce Kellett wrote:

> No, I am suggesting that Many-worlds is a failed theory, unable to
> account for everyday experience. A stochastic single-world theory is
> perfectly able to account for what we see.
>
> Bruce

Stochastic single-world theories make predictions that violate those of
quantum mechanics.

No they don't. When have violations of the quantum predictions been observed?

If the MWI (in the general sense of there existing a
multiverse rather than any details of how to derive the Born rule) is
not correct, then that's hard to reconcile with known experimental
results.

All experimental results to date are consistent with a single-world theory. There are several possibilities for such a theory, but to date, experiment does not distinguish between them.

New physics that so far has never been observed needs to be
assumed just to get rid of the Many Worlds. Also, this new physics
should appear not at the as of yet unprobed high energies where the
known laws of physics could plausibly break down, instead it would have
to appear at the mesoscopic or macroscopic scale where the laws of
physics are essentially fixed.


Bohm's theory does not require as-yet-unobserved new physics. GRW do postulate a new physical interaction, but that is below the level of current experimental detectability.

Besides, why should you assume that the Schrodinger equation is the ultimate physical law?

Bruce

Stathis Papaioannou

unread,
Feb 8, 2020, 1:26:06 AM2/8/20
to everyth...@googlegroups.com
But is Many Worlds consistent with what we observe or not, and if not, what would we observe if it were true?

For example, that the world was created six thousand years ago is inconsistent with observation, because there are fossils that are millions of years old, we can see light that left stars billions of years ago, and so on. But if God created the world six thousand years ago complete with false evidence that it was much older, that would be consistent with observation, but a bad theory nonetheless. Which is it with Many Worlds?


--
Stathis Papaioannou

Bruce Kellett

unread,
Feb 8, 2020, 1:37:58 AM2/8/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 5:26 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sat, 8 Feb 2020 at 15:19, Bruce Kellett <bhkel...@gmail.com> wrote:
On Sat, Feb 8, 2020 at 3:15 PM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sat, 8 Feb 2020 at 11:16, Bruce Kellett <bhkel...@gmail.com> wrote:
On Sat, Feb 8, 2020 at 4:33 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Fri, 7 Feb 2020 at 15:59, Bruce Kellett <bhke...@optusnet.com.au> wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Nevertheless Many Worlds is at least logically possible. What would the inhabitants expect to see, if not the world we currently see?


Many-worlds might be logically possible, but it is also completely useless. If every possible outcome from any experiment/interaction actually occurs, then the total data that results is independent of any probability measure. Consequently, one cannot use data from experiments to infer anything about any underlying probabilities, even if such exist at all. In particular, Many-worlds is incompatible with the Born rule, and with the overwhelming amount of evidence confirming the Born rule in quantum mechanics. So Many-worlds (and Everett) is a failed theory, disconfirmed by every experiment ever performed. If Many-worlds is correct, then the inhabitants have no basis on which to have any expectations about what they might see.

So are you suggesting that the inhabitants would just see chaos?


No, I am suggesting that Many-worlds is a failed theory, unable to account for everyday experience. A stochastic single-world theory is perfectly able to account for what we see.

But is Many Worlds consistent with what we observe or not,

No. Many-worlds is not confirmed by what we observe. Many worlds is consistent with any values whatsoever for the probabilities, whereas we observe only consistency with the Born rule and calculable probabilities.


and if not, what would we observe if it were true?


I have no idea. A theory without consistent probabilities is not what we observe. 

For example, that the world was created six thousand years ago is inconsistent with observation, because there are fossils that are millions of years old, we can see light that left stars billions of years ago, and so on. But if God created the world six thousand years ago complete with false evidence that it was much older, that would be consistent with observation, but a bad theory nonetheless. Which is it with Many Worlds?

A bad theory that is not consistent with observation. It is on a par with "God did it", because the probability that we would observe what we do if Many-worlds is correct is vanishingly small.

Bruce

Brent Meeker

unread,
Feb 8, 2020, 1:39:31 AM2/8/20
to everyth...@googlegroups.com


On 2/7/2020 9:54 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 4:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 8:14 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 1:26 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 5:57 PM, Bruce Kellett wrote:

There is nothing that picks out one particular set of paths as preferred in the many-worlds situation.

Sure you can.  For example you can pick out the set of paths whose statistics are within some bounds of the mean.

Assuming you know what the 'mean' is absent any experiment.

The mean is estimated by the average of the experimental values.


In other words, you use the data to infer probabilities. But the same data occur whatever the probabilities, so your backward inference to the probabilities is meaningless.
Otherwise you are just cherry picking data to support your arbitrary theory.
One can only get that in a stochastic one-world model.

All paths occur in a stochastic one-world model too.

No they don't. They are possible, perhaps, but they do not necessarily occur.

They don't necessarily occur.  But they probabilistically occur.

It means they occur with high probability given enough instances of the experiment.  So I don't see why you attach great significance to all possibilities occurring in MWI.



What on earth does that mean?

If the probability is very low, then the improbable sequences of results need not occur even if you repeat the experiment 'till the heat death of the universe. In MWI the low weight sequences necessarily occur in every run of the experiments. Do you not see the difference?

But the improbable sequences will occur in the same proportion in both scenarios.



  Otherwise it wouldn't be a stochastic model.  So it seems that all you objections to MWI apply equally.


Get a grip, Brent.

  The only difference is that some probability measure is assumed as part of the model.

And this gives one a principled reason for ignoring the paths that are not observed.

Why not ignore them because they are not observed?  That's a principled reason.

That is a one-world theory. And I agree that that is the way to go.

Low probability has an independent meaning in the one-world case, so one is unlikely to observe a low probability set of results.

One is unlikely to observe a result that is realized in only a small fraction of the MW branches.

Why? One does not choose one's results at random from the set of all possible results.

The theory is that the experience "you" have is determined by making a copy of you for each result; one of them, at random, is the "you" who has the experience.  So it is effectively a random sample from the possible results.

In MWI there is always an observer who gets every possible set of results. Why ignore those unfortunates who get results inconsistent with your pet theory?

Because they are relatively few in number and hence unlikely to be the "you" who gets the result.



  I agree that MWI fails to derive the Born rule.  But I don't agree that it is inconsistent with it, given the version of MWI that postulates many branches...not just one per possible outcome.

The point is that MWI is inconsistent with experience. There will always be observers who get results inconsistent with the Born rule.

Why do you think you can't get a result inconsistent with the Born rule in one world?  What do you mean by "inconsistent"?  The results are probabilistic so they will have degrees of consistency and inconsistency with the Born rule...just as there is a spread of results in MWI.


And we cannot ensure that we are not such observers. So how can we claim that our theory is confirmed by the data? The data are consistent with all possible theories -- or none at all.

But it's not all or nothing.  It's statistics.

Brent


Bruce

Bruce Kellett

unread,
Feb 8, 2020, 2:01:04 AM2/8/20
to everyth...@googlegroups.com
On Sat, Feb 8, 2020 at 5:39 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 9:54 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 4:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 8:14 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 1:26 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 5:57 PM, Bruce Kellett wrote:

There is nothing that picks out one particular set of paths as preferred in the many-worlds situation.

Sure you can.  For example you can pick out the set of paths whose statistics are within some bounds of the mean.

Assuming you know what the 'mean' is absent any experiment.

The mean is estimated by the average of the experimental values.


In other words, you use the data to infer probabilities. But the same data occur whatever the probabilities, so your backward inference to the probabilities is meaningless.
Otherwise you are just cherry picking data to support your arbitrary theory.
One can only get that in a stochastic one-world model.

All paths occur in a stochastic one-world model too.

No they don't. They are possible, perhaps, but they do not necessarily occur.

They don't necessarily occur.  But they probabilistically occur.

It means they occur with high probability given enough instances of the experiment.  So I don't see why you attach great significance to all possibilities occurring in MWI.

The problem here is "what constitutes enough instances of the experiment?". In MWI, all sequences occur for every run of several trials. In a single-world theory, there are some sequences that will have such a low probability that you could wait till the end of time and never see them.
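A back-of-envelope check of that single-world claim, with numbers of my own choosing: take one specific sequence of 1000 fair-coin outcomes and ask how likely it is to ever appear in a single world, even granting an absurdly large number of independent runs.

```python
# Editorial back-of-envelope (illustrative numbers).  Union bound on the
# chance that one specific 1000-trial sequence ever shows up in one world.
p_seq = 0.5 ** 1000        # probability of that exact sequence in one run
runs = 10 ** 100           # far more runs than any physical history allows
bound = runs * p_seq       # P(at least one occurrence) <= runs * p_seq
print(bound)               # ~9.3e-202: effectively never, in a single world
```

In MWI, by contrast, that same sequence is realized on some branch in every run, with certainty.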

What on earth does that mean?

If the probability is very low, then the improbable sequences of results need not occur even if you repeat the experiment 'till the heat death of the universe. In MWI the low weight sequences necessarily occur in every run of the experiments. Do you not see the difference?

But the improbable sequences will occur in the same proportion in both scenarios.

No they won't. Because we do not do an infinite number of repeats of any experiment. But all possible sequences occur on every run in the Many-worlds scenario. That does not seem like the same proportion in both scenarios.

  Otherwise it wouldn't be a stochastic model.  So it seems that all you objections to MWI apply equally.


Get a grip, Brent.

  The only difference is that some probability measure is assumed as part of the model.

And this gives one a principled reason for ignoring the paths that are not observed.

Why not ignore them because they are not observed?  That's a principled reason.

That is a one-world theory. And I agree that that is the way to go.

Low probability has an independent meaning in the one-world case, so one is unlikely to observe a low probability set of results.

One is unlikely to observe a result that is realized in only a small fraction of the MW branches.

Why? One does not choose one's results at random from the set of all possible results.

The theory is that the experience "you" have is determined by making a copy of you for each result; one of them, at random, is the "you" who has the experience.  So it is effectively a random sample from the possible results.

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.



In MWI there is always an observer who gets every possible set of results. Why ignore those unfortunates who get results inconsistent with your pet theory?

Because they are relatively few in number and hence unlikely to be the "you" who gets the result.

Unlikely to be you? OK, but what about the poor unfortunates who did get the anomalous results? You cavalierly choose to ignore them. But the fact that they might be few in number in the multiverse does not diminish their importance to themselves, even if not to anyone else.

  I agree that MWI fails to derive the Born rule.  But I don't agree that it is inconsistent with it, given the version of MWI that postulates many branches...not just one per possible outcome.

The point is that MWI is inconsistent with experience. There will always be observers who get results inconsistent with the Born rule.

Why do you think you can't get a result inconsistent with the Born rule in one world?

I don't think that. It is just that it is very unlikely -- of low probability. It has probability one with MWI.

What do you mean by "inconsistent"?  The results are probabilistic so they will have degrees of consistency and inconsistency with the Born rule...just as there is a spread of results in MWI.

There are no probabilities in MWI. The probability of getting an anomalous set of results for a sequence of measurements of z-spin up for repeated measurement on x-spin up particles is calculable in quantum mechanics. But the result is different in MWI since the probability for any sequence whatsoever is one.
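The single-world calculation alluded to here is an ordinary binomial tail. A sketch with example numbers of my own (1000 trials, "anomalous" meaning at least 60% z-up, where quantum mechanics gives P(up) = 1/2 per trial for z-measurements on x-spin-up particles):

```python
# Editorial sketch (illustrative numbers).  Binomial tail probability of an
# anomalous run -- 600 or more "up" results in 1000 trials at P(up) = 1/2.
from math import comb

n, p = 1000, 0.5
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(600, n + 1))
print(tail)   # vanishingly small (well below 1e-9) in a single stochastic world
```

In a one-world theory this number is the chance anyone ever sees such a run; in MWI a branch with exactly that run exists with certainty on every repetition.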


And we cannot ensure that we are not such observers. So how can we claim that our theory is confirmed by the data? The data are consistent with all possible theories -- or none at all.

But it's not all or nothing.  It's statistics.

So if we see anomalous results at the LHC we continue gathering data to ascertain whether it is a real effect, or merely a statistical anomaly. This possibility is not available in MWI (even though people pretend that it is). MWI cannot explain the consistency of the statistical results we observe.

Bruce

Philip Thrift

unread,
Feb 8, 2020, 7:06:57 AM2/8/20
to Everything List


On Friday, February 7, 2020 at 10:19:58 PM UTC-6, Bruce wrote:

A stochastic single-world theory is perfectly able to account for what we see.

Bruce



Victor Stenger said this from the time I first connected with him over 20 years ago.

It is rare to find any physicist in popular media that believes this.
Sabine Hossenfelder doesn't believe this.

@philipthrift 

Lawrence Crowell

unread,
Feb 8, 2020, 8:16:57 AM2/8/20
to Everything List
No matter how hard we try, statistics always has this element of subjectivity to it. Since entropy is S = -sum p log(p), the summation is of a logarithm, and these errors tend not to be very large. As a corollary we have various definitions of entropy and ways of computing it. This means that no matter how hard we try, physics has this subjective aspect to it, and in a lot of ways QBism has a few points along these lines.

LC

Lawrence Crowell

unread,
Feb 8, 2020, 8:18:08 AM2/8/20
to Everything List
On Friday, February 7, 2020 at 11:54:24 PM UTC-6, Bruce wrote:
On Sat, Feb 8, 2020 at 4:41 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 8:14 PM, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 1:26 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 5:57 PM, Bruce Kellett wrote:

There is nothing that picks out one particular set of paths as preferred in the many-worlds situation.

Sure you can.  For example you can pick out the set of paths whose statistics are within some bounds of the mean.

Assuming you know what the 'mean' is absent any experiment.

The mean is estimated by the average of the experimental values.


In other words, you use the data to infer probabilities. But the same data occur whatever the probabilities, so your backward inference to the probabilities is meaningless.

Bayesian inference works this way.

LC

Brent Meeker

unread,
Feb 8, 2020, 2:38:05 PM2/8/20
to everyth...@googlegroups.com
?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.  That some very improbable results cannot occur in SW QM.  I think you are mistaken.  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.





In MWI there is always an observer who gets every possible set of results. Why ignore those unfortunates who get rest inconsistent with your pet theory?

Because they are relatively few in number and hence unlikely to be the "you" who gets the result.

Unlikely to be you? OK, but what about the poor unfortunate who did get the anomalous results. You choose cavalierly to ignore him . But the fact that they might be few in number in the multiverse does not diminish their importance to themselves, even if not to anyone else.

True.  But the likelihood of being such an unfortunate is the same in either SW or MW.


  I agree that MWI fails to derive the Born rule.  But I don't agree that it is inconsistent with it, given the version of MWI that postulates many branches...not just one per possible outcome.

The point is that MWI is inconsistent with experience. There will always be observers who get results inconsistent with the Born rule.

Why do you think you can't get a result inconsistent with the Born rule in one world?

I don't think that. It is just that it is very unlikely -- of low probability. It has probability one with MWI.

It doesn't have probability one for you, or for any other experimenter.  For any given experimenter it has the same probability as in SWI.



What do you mean by "inconsistent"?  The results are probabilistic so they will have degrees of consistency and inconsistency with the Born rule...just as there is a spread of results in MWI.

There are no probabilities in MWI. The probability of getting an anomalous set of results for a sequence of measurements of z-spin up for repeated measurement on x-spin up particles is calculable in quantum mechanics. But the result is different in MWI since the probability for any sequence whatsoever is one.

For any sequence.  But not for any sequence seen by you.  The tests are assumed to be independent and identically distributed, so a sequence of tests must have the same statistics as an ensemble including all possible results, i.e. a series of experiments is ergodic.
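That iid picture can be simulated directly. The parameters here (p = 0.9, 500 trials per run, 5000 single-world runs) are mine, chosen just for illustration: anomalous runs, in the roughly-3-sigma sense, are rare but present, in the same proportion the Born rule assigns them.

```python
# Editorial simulation (illustrative parameters).  Repeated single-world
# runs of an iid binary experiment with P(up) = 0.9 per trial.
import random

random.seed(1)
p, n, runs = 0.9, 500, 5000
anomalous = 0
for _ in range(runs):
    freq = sum(random.random() < p for _ in range(n)) / n
    if abs(freq - p) > 0.04:    # ~3 sigma, since sd = sqrt(p*(1-p)/n) ~ 0.0134
        anomalous += 1
print(anomalous / runs)         # expected to be a few tenths of a percent
```

The observed anomalous fraction tracks the binomial tail weight, which is the sense in which the sequence of trials is ergodic.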




And we cannot ensure that we are not such observers. So how can we claim that our theory is confirmed by the data? The data are consistent with all possible theories -- or none at all.

But it's not all or nothing.  It's statistics.

So if we see anomalous results at the LHC we continue gathering data to ascertain whether it is a real effect, or merely a statistical anomaly. This possibility is not available in MWI (even though people pretend that it is).

?? MWI is just an interpretation.  It doesn't change the possible things we can do.


MWI cannot explain the consistency of the statistical results we observe.

Sure it can.  Any sequence of results in MWI one observes can be observed in SWI, and with the same probability for any given observer.

Brent


Bruce

Philip Thrift

unread,
Feb 8, 2020, 3:35:44 PM2/8/20
to Everything List


On Saturday, February 8, 2020 at 1:38:05 PM UTC-6, Brent wrote:



?? MWI is just an interpretation. 


Most of what I read in the Everett context refers to a theory:


Hugh Everett III’s relative-state formulation of quantum mechanics is a proposal for solving the quantum measurement problem by dropping the collapse dynamics from the standard von Neumann-Dirac formulation of quantum mechanics. Everett intended to recapture the predictions of the standard collapse theory by explaining why observers nevertheless get determinate measurement records that satisfy the standard quantum statistics. There has been considerable disagreement over the precise content of his theory and how it was supposed to work. Here we will consider how Everett himself presented the theory, then briefly compare his presentation to the many-worlds interpretation and other no-collapse options.



Mad-Dog Everettianism: Quantum Mechanics at Its Most Minimal

(Submitted on 23 Jan 2018)
To the best of our current understanding, quantum mechanics is part of the most fundamental picture of the universe. It is natural to ask how pure and minimal this fundamental quantum description can be. The simplest quantum ontology is that of the Everett or Many-Worlds interpretation, based on a vector in Hilbert space and a Hamiltonian. Typically one also relies on some classical structure, such as space and local configuration variables within it, which then gets promoted to an algebra of preferred observables. We argue that even such an algebra is unnecessary, and the most basic description of the world is given by the spectrum of the Hamiltonian (a list of energy eigenvalues) and the components of some particular vector in Hilbert space. Everything else - including space and fields propagating on it - is emergent from these minimal elements.

@philipthrift 

Brent Meeker

unread,
Feb 8, 2020, 3:48:22 PM2/8/20
to everyth...@googlegroups.com
The problem of SW theories is that they had to postulate measurement as a special kind of random event, which seemed at first to be defined only in relation to the mind of the measurer.  So it got tangled up with the mind-body problem.  This was largely relieved by decoherence theory, which explained measurement as a purely physical process.  If decoherence theory had been better developed before Everett, MWI might never have become an attractive interpretation.  MWI got rid of the special random event by postulating that all results happened, just to different copies of the experimenter or instrument.  But it still left a gap as to what physically constituted the branching process and how this process results in the Born rule.

Brent

Philip Thrift

unread,
Feb 8, 2020, 4:13:08 PM2/8/20
to Everything List


One stochastic single-world theory recently in arXiv:


Evolving Realities for Quantum Measure Theory

(Submitted on 27 Sep 2018)
We introduce and explore Rafael Sorkin's evolving co-event scheme: a theoretical framework for determining completely which events do and do not happen in evolving quantum, or indeed classical, systems. The theory is observer-independent and constructed from discrete histories, making the framework a potential setting for discrete quantum cosmology and quantum gravity, as well as ordinary discrete quantum systems. The foundation of this theory is Quantum Measure Theory, which generalises (classical) measure theory to allow for quantum interference between alternative histories; and its co-event interpretation, which describes whether events can or can not occur, and in what combination, given a system and a quantum measure. In contrast to previous co-event schemes, the evolving co-event scheme is applied in stages, in the stochastic sense, without any dependence on later stages, making it manifestly compatible with an evolving block view. It is shown that the co-event realities produced by the basic evolving scheme do not depend on the inclusion or exclusion of zero measure histories in the history space, which follows non-trivially from the basic rules of the scheme. It is also shown that this evolving co-event scheme will reduce to producing classical realities when it is applied to classical systems.



Apparently young Henry Wilkes recently finished his PhD thesis:



The form and interpretation of the decoherence functional
Wilkes, Henry Luka

Abstract:
In this thesis we will explore the development of a realist quantum theory based on the decoherence functional using the co-event interpretation of Quantum Measure Theory. The Sum-Over-Histories theory of quantum mechanics will provide the bedding for a Hilbert-space-free stochastic-like theory that can accommodate spacetime-like objects, and can therefore be applied to quantum gravity and cosmology, as well as give an alternative perspective on quantum phenomena. The primitive objects of the theory are histories, which give different accounts of a system's evolution, and the decoherence functional, which sums the quantum interference between these histories. Quantum Measure Theory and Generalised Quantum Mechanics (a theory close to Decoherent Histories) then give alternative interpretations of the decoherence functional's relation to reality. In these theories, the decoherence functional is mathematically constrained in analogue to probability measures. However, one of the conditions, called weak positivity, can be lost under composition of isolated systems. We will extend this composition argument to take the case for a stronger condition of strong positivity for decoherence functionals. The bulk of the report will then focus on the co-event interpretation of Quantum Measure Theory, where co-events give full accounts of which events do or do not occur for a given system. The quantum nature of reality is expressed through the breaking of classical logic within these co-event descriptions. We will focus on evolving co-event schemes, which dynamically construct co-events to describe the reality of an evolving system in tandem with its progression. The evolving co-event schemes will be shown to reproduce classical logic when they are applied to classical systems. Moreover, similar to classical stochastic theory, these schemes will be shown to be invariant under the inclusion or exclusion of non-interfering histories. 
We will also explore a number of outstanding problems for these schemes, and will propose some potential modifications.


@philipthrift

Bruce Kellett

Feb 8, 2020, 5:13:12 PM
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.


However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 

In any branch, when the experiment is performed, that branch is deleted and replaced by n new branches, one for each possible outcome of the experiment. This is clearly independent of any model for the probability associated with each outcome. In the literature, people speak about "weights of branches". But what does this mean? That there are more of some types of branch, or that some branches are more 'important' than others? It does not seem clear to me that one can assign any operational meaning to such a concept of "branch weights".

In this situation, the set of n^N sequences of results for this series of trials is independent of any a priori assignment of probabilities to individual outcomes: whatever the probabilities or weights, the set of sequences of results is the same. In other words, for the experimentalist, the data he has to work with is the same for any presumed underlying probabilistic model. Consequently, experimental data cannot be used to infer any probabilistic model. In particular, experimental data cannot be used to test any prior theory one might have about the probabilities of particular outcomes from individual experiments.

The conclusion would be that such a model is unable to account for standard scientific practice, in which we definitely use experimental data to test our theories, and as the basis for developing new and improved theories. This is impossible on the above understanding of MWI.

So this understanding of MWI is presumably flawed. But how? I do not see any other realistic way to implement the idea that all possible results occur in any trial. Talking about branch weights and probabilities seems to be entirely irrelevant because these things have no operational significance in such a model.
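The combinatorial point is easy to make concrete. In a toy model where every one of n outcomes occurs on each of N trials (one branch per outcome, as in Everett's scheme), the realized set of branch-sequences is the full product set, and it is identical whatever weights one writes down. A minimal sketch, with purely illustrative numbers:

```python
from itertools import product

# Toy Everettian branching: n outcomes per trial, and every outcome occurs
# on every one of N trials, so the multiverse holds all n^N outcome sequences.
def branch_sequences(n, N):
    return set(product(range(n), repeat=N))

# The set of sequences is the same for ANY assignment of outcome weights;
# the weights simply never enter the construction.
seqs_fair = branch_sequences(2, 4)    # imagine weights (1/2, 1/2)
seqs_biased = branch_sequences(2, 4)  # imagine weights (9/10, 1/10)
assert seqs_fair == seqs_biased
assert len(seqs_fair) == 2 ** 4       # 16 sequences either way
```

Whatever probabilistic model the experimenter entertains, the same 16 sequences are realized, which is exactly the sense in which the raw branch set carries no information about the weights.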

Bruce

Brent Meeker

Feb 8, 2020, 5:48:00 PM
to everyth...@googlegroups.com


On 2/8/2020 2:12 PM, Bruce Kellett wrote:
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.

You keep saying that; but you're misreferencing what "that" is.  The probability of any given observer seeing the low probability event is just that low probability.  "That" isn't unity.
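The distinction can be put numerically (a toy branch-counting sketch with made-up numbers, presupposing the kind of branch weighting discussed in this thread): with P(heads)=1/2 over 20 tosses, some observer certainly records 20 heads, yet the fraction of observers who do, i.e. the probability that a given observer is that one, is the same tiny number SW QM assigns.

```python
# P(heads) = 1/2, N = 20 tosses; one branch (and one observer-copy) per
# length-20 outcome sequence, so 2^20 observer-copies in all.
N = 20
total_observers = 2 ** N
all_heads_observers = 1               # exactly one branch records 20 heads

someone_sees_it = all_heads_observers >= 1           # True: MWI guarantees this
p_you_see_it = all_heads_observers / total_observers
assert someone_sees_it
assert p_you_see_it == 0.5 ** 20      # the same small number SW QM gives
```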




However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 

In any branch, when the experiment is performed, that branch is deleted and replaced by n new branches, one for each possible outcome of the experiment. This is clearly independent of any model for the probability associated with each outcome. In the literature, people speak about "weights of branches". But what does this mean? That there are more of some types of branch, or that some branches are more 'important' than others? It does not seem clear to me that one can assign any operational meaning to such a concept of "branch weights".

That's why I said that to make it work one needs to postulate that there are many more branches than possible results, so that results can be "weighted" by having more representation in the ensemble of branches.  Probabilities are then proportional to branch count.  That gives a definite physical meaning to probabilities in MWI.  It's a physical model that provides "weights".  BUT it's a cheat as far as saying MWI implies or derives the Born rule.  The rule has been slipped in by hand.
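That branch-counting move can be sketched directly (illustrative code with hypothetical weights; the Born weights are fed in by hand, which is exactly the "cheat"): realize each rational weight as a branch multiplicity over a common denominator, so that counting branches reproduces the weight.

```python
import math
from fractions import Fraction

# Branch counting: realize rational Born weights as branch multiplicities.
# The Born rule is put in by hand here -- nothing is derived.
def make_branches(weights):
    """weights: dict mapping outcome -> Fraction, summing to 1."""
    denom = math.lcm(*(w.denominator for w in weights.values()))
    return [o for o, w in weights.items() for _ in range(int(w * denom))]

branches = make_branches({"up": Fraction(501, 1000), "down": Fraction(499, 1000)})
assert len(branches) == 1000          # a thousand branches for 501/499 weights
assert branches.count("up") == 501    # probability = branch count / total
```

An observer picked uniformly over these branches sees "up" with relative frequency 501/1000, so uniform counting over branches recovers the weight that was written in.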



In this situation, the set of n^N sequences of results for this series of trials is independent of any a priori assignment of probabilities to individual outcomes

I don't understand what you mean by that.  Are you limiting this to a binomial experiment, with H's and T's?  And are you assuming that at every trial each outcome occurs exactly once in the multiverse?


: whatever the probabilities or weights, the set of sequences of results is the same. In other words, for the experimentalist, the data he has to work with is the same for any presumed underlying probabilistic model.

Are you saying the data he obtains has no probabilistic relation to the ensemble of possible outcomes?  You seem to be putting the Bayesian inference backwards.  The data he has is in some sense independent of any model.  But he's evaluating his model given the data.  The fact that this doesn't change the data is the same in any interpretation.


Consequently, experimental data cannot be used to infer any probabilistic model. In particular, experimental data cannot be used to test any prior theory one might have about the probabilities of particular outcomes from individual experiments.

Sure it can.  The data can imply a low posterior probability for a given model.  The experimenter has gotten one particular result.  It is irrelevant that other results occurred to other copies of the experimenter.



The conclusion would be that such a model is unable to account for standard scientific practice, in which we definitely use experimental data to test our theories, and as the basis for developing new and improved theories. This is impossible on the above understanding of MWI.

So this understanding of MWI is presumably flawed. But how? I do not see any other realistic way to implement the idea that all possible results occur in any trial. Talking about branch weights and probabilities seems to be entirely irrelevant because these things have no operational significance in such a model.

They are parameters to the hypothetical model, to be evaluated by calculating their posterior probability given the observed results.  All possible results don't occur in any branch.  They occur in other branches, to other observers, and that influences the result no more than supposing the results are drawn from some ensemble.

Brent


Bruce

Bruce Kellett

Feb 8, 2020, 6:21:24 PM
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 9:48 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/8/2020 2:12 PM, Bruce Kellett wrote:
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.

You keep saying that; but you're misreferencing what "that" is.  The probability of any given observer seeing the low probability event is just that low probability.  "That" isn't unity.

It is unity if the hypothesis is that every outcome occurs for every trial. It is not a matter of any arbitrary observer -- it is that there is an observer who definitely sees that result.

However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 

In any branch, when the experiment is performed, that branch is deleted and replaced by n new branches, one for each possible outcome of the experiment. This is clearly independent of any model for the probability associated with each outcome. In the literature, people speak about "weights of branches". But what does this mean? That there are more of some types of branch, or that some branches are more 'important' than others? It does not seem clear to me that one can assign any operational meaning to such a concept of "branch weights".

That's why I said that to make it work one needs to postulate that there are many more branches than possible results, so that results can be "weighted" by having more representation in the ensemble of branches.  Probabilities are then proportional to branch count.  That gives a definite physical meaning to probabilities in MWI.  It's a physical model that provides "weights".  BUT it's a cheat as far as saying MWI implies or derives the Born rule.  The rule has been slipped in by hand.


It certainly is a cheat. And it is not just an interpretation of QM -- it is a different model, incompatible with Everett. Everett is quite clear: he postulates one branch -- one 'relative state' -- for each component of a quantum superposition. This is incompatible with multiple branches for each such component.
In this situation, the set of n^N sequences of results for this series of trials is independent of any a priori assignment of probabilities to individual outcomes

I don't understand what you mean by that.  Are you limiting this to a binomial experiment, with H's and T's?  And are you assuming that at every trial each outcome occurs exactly once in the multiverse?

Did you not see that I speak of 'n' possible outcomes for every experiment? It is by no means limited to binary outcomes. And yes, I am following Everett and assuming that each trial outcome occurs exactly once in the multiverse. If you go beyond this, then you are talking about a different, non-Everettian model. I think that most of your comments are based on your assumption that an uncountable infinity of branches is associated with each possible outcome (to accommodate all real weights). That is why we seem to be constantly talking at cross purposes -- you have not made your assumptions clear.

: whatever the probabilities or weights, the set of sequences of results is the same. In other words, for the experimentalist, the data he has to work with is the same for any presumed underlying probabilistic model.

Are you saying the data he obtains has no probabilistic relation to the ensemble of possible outcomes?  You seem to be putting the Bayesian inference backwards.  The data he has is in some sense independent of any model.  But he's evaluating his model given the data.  The fact that this doesn't change the data is the same in any interpretation.

The point is that the data are independent of any probabilistic model -- given a strict Everettian interpretation of the relative states and branching. Thus the data cannot be used to evaluate any such model.


Consequently, experimental data cannot be used to infer any probabilistic model. In particular, experimental data cannot be used to test any prior theory one might have about the probabilities of particular outcomes from individual experiments.

Sure it can.  The data can imply a low posterior probability for a given model.  The experimenter has gotten one particular result.  It is irrelevant that other results occurred to other copies of the experimenter.

That is only if probabilities and branch weights have an objective meaning. My contention is that they do not in a strictly Everettian model.
The conclusion would be that such a model is unable to account for standard scientific practice, in which we definitely use experimental data to test our theories, and as the basis for developing new and improved theories. This is impossible on the above understanding of MWI.

So this understanding of MWI is presumably flawed. But how? I do not see any other realistic way to implement the idea that all possible results occur in any trial. Talking about branch weights and probabilities seems to be entirely irrelevant because these things have no operational significance in such a model.

They are parameters to the hypothetical model, to be evaluated by calculating their posterior probability given the observed results.  All possible results don't occur in any branch.  They occur in other branches, to other observers, and that influences the result no more than supposing the results are drawn from some ensemble.

Again, you seem to be implicitly relying on the assumption that branch weights actually exist and have objective meaning. In other words, your comments presume your idea of implementing probabilities as branch counts. This is a different model. It is not implicit in the Schrodinger equation, and it is certainly not what Everett envisaged.

Bruce

Stathis Papaioannou

Feb 8, 2020, 6:21:45 PM
to everyth...@googlegroups.com
On Sun, 9 Feb 2020 at 09:13, Bruce Kellett <bhkel...@gmail.com> wrote:
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.


However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 

Do you have a fundamental problem with probabilities where every outcome occurs? For example, if you are told you have been copied 999 times at location A and once at location B, would you not guess that you are most likely one of the copies at location A?
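The intuition behind that question is bare counting under a uniform self-location prior over one's copies -- which is precisely the assumption at issue. A minimal sketch, with the 999/1 numbers of the example:

```python
from fractions import Fraction

# 999 copies at location A, 1 copy at location B; assume a uniform prior
# over one's copies (this assumption is exactly what is being debated).
copies = ["A"] * 999 + ["B"]
p_A = Fraction(copies.count("A"), len(copies))
assert p_A == Fraction(999, 1000)   # credence of being at A under that prior
```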

In any branch, when the experiment is performed, that branch is deleted and replaced by n new branches, one for each possible outcome of the experiment. This is clearly independent of any model for the probability associated with each outcome. In the literature, people speak about "weights of branches". But what does this mean? That there are more of some types of branch, or that some branches are more 'important' than others? It does not seem clear to me that one can assign any operational meaning to such a concept of "branch weights".

In this situation, the set of n^N sequences of results for this series of trials is independent of any a priori assignment of probabilities to individual outcomes: whatever the probabilities or weights, the set of sequences of results is the same. In other words, for the experimentalist, the data he has to work with is the same for any presumed underlying probabilistic model. Consequently, experimental data cannot be used to infer any probabilistic model. In particular, experimental data cannot be used to test any prior theory one might have about the probabilities of particular outcomes from individual experiments.

The conclusion would be that such a model is unable to account for standard scientific practice, in which we definitely use experimental data to test our theories, and as the basis for developing new and improved theories. This is impossible on the above understanding of MWI.

So this understanding of MWI is presumably flawed. But how? I do not see any other realistic way to implement the idea that all possible results occur in any trial. Talking about branch weights and probabilities seems to be entirely irrelevant because these things have no operational significance in such a model.

Bruce

--
Stathis Papaioannou

Brent Meeker

Feb 8, 2020, 7:08:05 PM
to everyth...@googlegroups.com


On 2/8/2020 3:21 PM, Bruce Kellett wrote:
On Sun, Feb 9, 2020 at 9:48 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/8/2020 2:12 PM, Bruce Kellett wrote:
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.

You keep saying that; but you're misreferencing what "that" is.  The probability of any given observer seeing the low probability event is just that low probability.  "That" isn't unity.

It is unity if the hypothesis is that every outcome occurs for every trial. It is not a matter of any arbitrary observer -- it is that there is an observer who definitely sees that result.

Not "that result" = "every outcome occurs".  It's that given an outcome, there is an observer who sees it.  And given an outcome there is only a probability P(outcome_i) that you see it.



However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 

In any branch, when the experiment is performed, that branch is deleted and replaced by n new branches, one for each possible outcome of the experiment. This is clearly independent of any model for the probability associated with each outcome. In the literature, people speak about "weights of branches". But what does this mean? That there are more of some types of branch, or that some branches are more 'important' than others? It does not seem clear to me that one can assign any operational meaning to such a concept of "branch weights".

That's why I said that to make it work one needs to postulate that there are many more branches than possible results, so that results can be "weighted" by having more representation in the ensemble of branches.  Probabilities are then proportional to branch count.  That gives a definite physical meaning to probabilities in MWI.  It's a physical model that provides "weights".  BUT it's a cheat as far as saying MWI implies or derives the Born rule.  The rule has been slipped in by hand.


It certainly is a cheat. And it is not just an interpretation of QM -- it is a different model, incompatible with Everett. Everett is quite clear: he postulates one branch -- one 'relative state' -- for each component of a quantum superposition. This is incompatible with multiple branches for each such component.
In this situation, the set of n^N sequences of results for this series of trials is independent of any a priori assignment of probabilities to individual outcomes

I don't understand what you mean by that.  Are you limiting this to a binomial experiment, with H's and T's?  And are you assuming that at every trial each outcome occurs exactly once in the multiverse?

Did you not see that I speak of 'n' possible outcomes for every experiment? It is by no means limited to binary outcomes. And yes, I am following Everett and assuming that each trial outcome occurs exactly once in the multiverse. If you go beyond this, then you are talking about a different, non-Everettian model. I think that most of your comments are based on your assumption that an uncountable infinity of branches is associated with each possible outcome (to accommodate all real weights). That is why we seem to be constantly talking at cross purposes -- you have not made your assumptions clear.

I was trying to address both at once.  But yes, I think Bruno's idea of an MWI, as well as other people's, requires a very large number of branches; but not a realized infinity, because that makes it impossible to assign a measure.



: whatever the probabilities or weights, the set of sequences of results is the same. In other words, for the experimentalist, the data he has to work with is the same for any presumed underlying probabilistic model.

Are you saying the data he obtains has no probabilistic relation to the ensemble of possible outcomes?  You seem to be putting the Bayesian inference backwards.  The data he has is in some sense independent of any model.  But he's evaluating his model given the data.  That fact that this doesn't change the data is the same in any interpretation.

The point is that the data are independent of any probabilistic model -- given a strict Everettian interpretation of the relative states and branching. Thus the data cannot be used to evaluate any such model.

What are you calling "the data"?  All the branch results, one per result? All the branch results with weighting by multiple branches for the same result? The observations of some particular observer?  It seems your conclusion only applies to the first.




Consequently, experimental data cannot be used to infer any probabilistic model. In particular, experimental data cannot be used to test any prior theory one might have about the probabilities of particular outcomes from individual experiments.

Sure it can.  The data can imply a low posterior probability for a given model.  The experimenter has gotten one particular result.  It is irrelevant that other results occurred to other copies of the experimenter.

That is only if probabilities and branch weights have an objective meaning. My contention is that they do not in a strictly Everettian model.
The conclusion would be that such a model is unable to account for standard scientific practice, in which we definitely use experimental data to test our theories, and as the basis for developing new and improved theories. This is impossible on the above understanding of MWI.

So this understanding of MWI is presumably flawed. But how? I do not see any other realistic way to implement the idea that all possible results occur in any trial. Talking about branch weights and probabilities seems to be entirely irrelevant because these things have no operational significance in such a model.

They are parameters to the hypothetical model, to be evaluated by calculating their posterior probability given the observed results.  All possible results don't occur in any branch.  They occur in other branches, to other observers, and that influences the result no more than supposing the results are drawn from some ensemble.

Again, you seem to be implicitly relying on the assumption that branch weights actually exist and have objective meaning. In other words, your comments presume your idea of implementing probabilities as branch counts. This is a different model. It is not implicit in the Schrodinger equation, and it is certainly not what Everett envisaged.

But it is implicit, or even explicit, in Bruno's model.  It's also consistent with Barbour's model.  My criticism of it is that by requiring this multiple branching -- you need two branches if Pup=1/2, Pdown=1/2, but a thousand branches if Pup=501/1000, Pdown=499/1000 -- you have resorted to something outside Schroedinger's equation, and you have to put in the Born rule by hand.  But in Bruno's theory he begins by assuming a potential infinity of computational threads, which then branch from identical bundles.  Whether he can get QM and the Born rule remains to be seen.

Brent


Bruce

Bruce Kellett

Feb 8, 2020, 9:42:37 PM
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 10:21 AM Stathis Papaioannou <stat...@gmail.com> wrote:
On Sun, 9 Feb 2020 at 09:13, Bruce Kellett <bhkel...@gmail.com> wrote:
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.


However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 
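The independence of the branch set from any assumed weights can be checked directly. In this sketch (binary outcomes for brevity; the 25% tolerance band is an arbitrary choice for illustration), the 2^N branches after N trials are the same whatever probability one assumes, and under the bare counting measure relative frequencies cluster around 1/2:

```python
from itertools import product

N = 12
branches = list(product("HT", repeat=N))  # one branch per sequence of results
assert len(branches) == 2 ** N  # n**N branches, here n = 2

# The set of branches is fixed: nothing in it depends on any assumed
# probability for H versus T. Under the counting measure, most branches
# record a relative frequency of H near 1/2, whatever weights one assigns.
near_half = sum(1 for b in branches if abs(b.count("H") / N - 0.5) <= 0.25)
fraction_near_half = near_half / len(branches)  # about 0.96 for N = 12
```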

Do you have a fundamental problem with probabilities where every outcome occurs?

I thought I had made it clear that I do not think that any meaningful notion of probability can be defined in that case: such as in Everett's model where there is just one branch for each term in the original superposition -- i.e., all outcomes occur just once on each trial.

For example, if you are told you have been copied 999 times at location A and once at location B, would you not guess that you are most likely one of the copies at location A?

No. For to make such a guess would be to assume a dualist model of personal identity: viz., that I have an immortal soul that is not duplicated with my body, but assigned at random to one of the duplicates. I do not believe this, nor do I believe that any concept of probability is relevant to your presumed scenario.

Bruce

Bruce Kellett

unread,
Feb 8, 2020, 9:54:04 PM2/8/20
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 11:08 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/8/2020 3:21 PM, Bruce Kellett wrote:
On Sun, Feb 9, 2020 at 9:48 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/8/2020 2:12 PM, Bruce Kellett wrote:
On Sun, Feb 9, 2020 at 6:38 AM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/7/2020 11:00 PM, Bruce Kellett wrote:

It is an indexical theory. The problem is that in MWI there will always be observers who see the sequences that are improbable according to the Born rule. This is not the case in the single-world theory. There is no random sampling from all possibilities in the single-world theory.

?? There's something deterministic in single-world QM?  You seem to have taken the position that MWI is not just an interpretation, but a different theory.

That is a possibility. I do think that MWI has difficulty with probability, and with accounting for the results of normal observation.

That some very improbable results cannot occur in SW QM.  I think you are mistaken.

I don't know where you got the idea that I might think this.


  No matter how low a probability the Born rule assigns to a result, that result could occur on the first trial.


Yes, but in SW the probability of that is very low: in MWI the probability for that is unity.

You keep saying that; but you're misreferencing what "that" is.  The probability of any given observer seeing the low probability event is just that low probability.  "That" isn't unity.

It is unity if the hypothesis is that every outcome occurs on every trial. It is not a matter of any arbitrary observer -- it is that there is an observer who definitely sees that result.

Not "that result" = "every outcome occurs".  It's that given an outcome, there is an observer who sees it.

Don't twist things around.
  And given an outcome there is only a probability P(outcome_i) that you see it.

Since it is the existence of such a probability, P(outcome_i) that is in question, your comment begs the question.
However, we seem to be in danger of going round in circles on this, so it might be time to try a new tack.

As I said, I have difficulty understanding how the concept of probability can make sense when all results occur in every trial. If you have N independent repetitions of an interaction or experiment that has n possible outcomes, the result, if every outcome occurs every time, is a set of n^N sequences of results. The question is "How does probability fit into such a picture?" 

In any branch, when the experiment is performed, that branch is deleted and replaced by n new branches, one for each possible outcome of the experiment. This is clearly independent of any model for the probability associated with each outcome. In the literature, people speak about "weights of branches". But what does this mean? -- that there are more of some types of branch? or that some branches are more 'important' than others? It does not seem clear to me that one can assign any operational meaning to such a concept of "branch weights".

That's why I said that to make it work one needs to postulate that there are many more branches than possible results, so that results can be "weighted" by having more representation in the ensemble of branches.  Probabilities are then proportional to branch count.  That gives a definite physical meaning to probabilities in MWI.  It's a physical model that provides "weights".  BUT it's a cheat as far as saying MWI implies or derives the Born rule.  The rule has been slipped in by hand.


It certainly is a cheat. And it is a different model. It is not just an interpretation of QM -- it is a different model, incompatible with Everett. Everett is quite clear: he postulates one branch -- one 'relative state'  -- for each component of a quantum superposition. This is incompatible with multiple branches for each such component.
In this situation, the set of n^N sequences of results for this series of trials is independent of any a priori assignment of probabilities to individual outcomes

I don't understand what you mean by that.  Are you limiting this to a binomial experiment, with H's and T's?  And are you assuming that at every trial each outcome occurs exactly once in the multiverse?

Did you not see that I speak of 'n' possible outcomes for every experiment? It is by no means limited to binary outcomes. And yes, I am following Everett and assuming that each trial outcome occurs exactly once in the multiverse. If you go beyond this, then you are talking about a different, non-Everettian model. I think that most of your comments are based on your assumption that an uncountable infinity of branches is associated with each possible outcome (to accommodate all real weights). That is why we seem to be constantly talking at cross purposes -- you have not made your assumptions clear.

I was trying to address both at once.  But, yes I think Bruno's idea of a MWI, as well as other people's, requires a very large number of branches; but not a realized infinity, because that makes it impossible to assign a measure.

So they are not talking about Everettian QM.

: whatever the probabilities or weights, the set of sequences of results is the same. In other words, for the experimentalist, the data he has to work with is the same for any presumed underlying probabilistic model.

Are you saying the data he obtains has no probabilistic relation to the ensemble of possible outcomes?  You seem to be putting the Bayesian inference backwards.  The data he has is in some sense independent of any model.  But he's evaluating his model given the data.  The fact that this doesn't change the data is the same in any interpretation.

The point is that the data are independent of any probabilistic model -- given a strict Everettian interpretation of the relative states and branching. Thus the data cannot be used to evaluate any such model.

What are you calling "the data"?  All the branch results, one per result? All the branch results with weighting by multiple branches for the same result? The observations of some particular observer?  It seems your conclusion only applies to the first.

The first: the set of branches generated by assigning one new branch to each result in each trial of the experiment. So of course my observations apply only to this case. This is the model proposed by Everett, and I was exploring whether the concept of probability made sense in this simple model. My conclusion is that it does not. Consequently, Everett fails as a scientific theory.
Consequently, experimental data cannot be used to infer any probabilistic model. In particular, experimental data cannot be used to test any prior theory one might have about the probabilities of particular outcomes from individual experiments.

Sure it can.  The data can imply a low posterior probability for a given model.  The experimenter has gotten one particular result.  It is irrelevant that other results occurred to other copies of the experimenter.

That is only if probabilities and branch weights have an objective meaning. My contention is that they do not in a strictly Everettian model.
The conclusion would be that such a model is unable to account for standard scientific practice, in which we definitely use experimental data to test our theories, and as the basis for developing new and improved theories. This is impossible on the above understanding of MWI.

So this understanding of MWI is presumably flawed. But how? I do not see any other realistic way to implement the idea that all possible results occur in any trial. Talking about branch weights and probabilities seems to be entirely irrelevant because these things have no operational significance in such a model.

They are parameters of the hypothetical model, to be evaluated by calculating their posterior probability given the observed results.  All possible results don't occur in any one branch.  They occur in other branches to other observers, and that influences the result no more than supposing the results are drawn from some ensemble.

Again, you seem to be implicitly relying on the assumption that branch weights actually exist and have objective meaning. In other words, your comments presume your idea of implementing probabilities as branch counts. This is a different model. It is not implicit in the Schrodinger equation, and it is certainly not what Everett envisaged.

But it is implicit, or even explicit in Bruno's model.  It's also consistent with Barbour's model.

It can be consistent with as many models as you like. It is simply not Everettian QM. It is some ad hoc concoction that totally undermines the point that was the basic attraction of Everett in the first place. People like Carroll and Wallace laud Everett because they see it as quantum mechanics in the raw -- the Schrodinger equation without extraneous additional assumptions. You seem bent on adding all these extraneous assumptions, most of which are not even consistent with the Schrodinger equation, and still claim that you are talking about the same model.

 
 My criticism of it is that this multiple branching means you need two branches if Pup=1/2, Pdwn=1/2, but a thousand branches if Pup=501/1000, Pdwn=499/1000; you have then resorted to something outside Schroedinger's equation and have to put in Born's rule by hand.

I agree. Any such addition is ad hoc and ugly. And it probably doesn't even work if you examine it closely.

 But in Bruno's theory he begins by assuming a potential infinity of computational threads, which then branch from identical bundles.  Whether he can get QM and Born rule remains to be seen.

Of course he can't. He can't get any real physics from his models. And it is very doubtful if he has actually ever proved anything of value.

Bruce

Brent Meeker

unread,
Feb 8, 2020, 10:33:40 PM2/8/20
to everyth...@googlegroups.com


On 2/8/2020 6:53 PM, Bruce Kellett wrote:
But it is implicit, or even explicit in Bruno's model.  It's also consistent with Barbour's model.

It can be consistent with as many models as you like. It is simply not Everettian QM. It is some ad hoc concoction that totally undermines the point that was the basic attraction of Everett in the first place. People like Carroll and Wallace laud Everett because they see it as quantum mechanics in the raw -- the Schrodinger equation without extraneous additional assumptions. You seem bent on adding all these extraneous assumptions, most of which are not even consistent with the Schrodinger equation, and still claim that you are talking about the same model.

I think Everett assumed Born's rule as a kind of weight attached to each branch; so there was only one branch per result and the Born rule was assumed.  It is only later that the purists, who wanted to say MWI is only the Schroedinger equation, have undertaken to prove the Born rule follows from it with only some "obvious" additional assumption (like the decision theoretic "proofs").  As far as I know, all of them have begged the question in that their additional obvious assumption is no better than just assuming the Born rule...which at least follows from Gleason's theorem once you assume the theory returns a probability.

Brent

Stathis Papaioannou

unread,
Feb 8, 2020, 11:26:12 PM2/8/20
to everyth...@googlegroups.com


On 9 Feb 2020, at 13:42, Bruce Kellett <bhkel...@gmail.com> wrote:



Strange that you should say that, since in the philosophical literature (eg. Derek Parfit) the position you describe as dualist is called "reductionist", assuming there is no soul and the mind is duplicated along with the body. Anyway, you would not do well if you assumed this in a world where duplication occurred commonly. If you were rewarded for betting correctly and punished for betting incorrectly, the world would come to be dominated by people who assume in the above scenario they have a 99.9% chance of finding themselves at A.

Bruce Kellett

unread,
Feb 8, 2020, 11:40:31 PM2/8/20
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 2:33 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/8/2020 6:53 PM, Bruce Kellett wrote:
But it is implicit, or even explicit in Bruno's model.  It's also consistent with Barbour's model.

It can be consistent with as many models as you like. It is simply not Everettian QM. It is some ad hoc concoction that totally undermines the point that was the basic attraction of Everett in the first place. People like Carroll and Wallace laud Everett because they see it as quantum mechanics in the raw -- the Schrodinger equation without extraneous additional assumptions. You seem bent on adding all these extraneous assumptions, most of which are not even consistent with the Schrodinger equation, and still claim that you are talking about the same model.

I think Everett assumed Born's rule as a kind of weight attached to each branch; so there was only one branch per result and the Born rule was assumed.

Well, Everett did have something that he considered a derivation of the Born rule. He looked for a weight to attach to each branch and assumed that it was a function of the branch amplitudes. The squared modulus of the amplitude gave an additive measure that he took as a probability.
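Everett's additive measure can be illustrated numerically. In this sketch the three amplitudes are arbitrary hypothetical values chosen for the example: the squared moduli of the (normalized) amplitudes sum to one, and coarse-graining orthogonal branches simply adds their weights, which is the additivity property that singled out the squared modulus.

```python
# Hypothetical amplitudes for a three-component superposition.
amps = [0.6, 0.48 + 0.36j, 0.52j]
norm = sum(abs(a) ** 2 for a in amps) ** 0.5
amps = [a / norm for a in amps]  # normalize the state

weights = [abs(a) ** 2 for a in amps]  # Everett's measure on branches
assert abs(sum(weights) - 1.0) < 1e-12  # the measure is normalized

# Additivity: merging two orthogonal branches adds their weights.
merged = weights[0] + weights[1]
```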

But the argument that I have given shows that attaching such a measure or weight to each branch achieves nothing, since the branching structure with one branch for each component of a superposition does not leave any room for such a label attached to each branch to have any operational effect. Whatever label one attaches to branches, one still gets the same set of branches and the same limitations apply.


It is only later that the purists, who wanted to say MWI is only the Schroedinger equation, have undertaken to prove the Born rule follows from it with only some "obvious" additional assumption (like the decision theoretic "proofs").  As far as I know, all of them have begged the question in that their additional obvious assumption is no better than just assuming the Born rule...which at least follows from Gleason's theorem once you assume the theory returns a probability.

I agree that the purist attempts to derive the Born rule from the SE plus some simple assumptions have generally failed. Adrian Kent and David Albert have devoted some effort to showing how they failed -- especially the Deutsch-Wallace ideas based on decision theory. But this is a bit beside the point too. My main concern was to show that the basic Everett idea with one branch per possible outcome does not lead to a viable physical theory. The full details are given in the excerpts from Adrian Kent that I started this discussion with.

Bruce

Bruce Kellett

unread,
Feb 8, 2020, 11:43:42 PM2/8/20
to everyth...@googlegroups.com
They may end up dominating -- but possibly that is only because, by construction, there are more going to A. As with Bruno's W/M duplication, there is an unresolved question of personal identity at stake here, and your solution is not necessarily correct.

Bruce

Bruce Kellett

unread,
Feb 9, 2020, 1:22:20 AM2/9/20
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 2:33 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 2/8/2020 6:53 PM, Bruce Kellett wrote:
But it is implicit, or even explicit in Bruno's model.  It's also consistent with Barbour's model.

It can be consistent with as many models as you like. It is simply not Everettian QM. It is some ad hoc concoction that totally undermines the point that was the basic attraction of Everett in the first place. People like Carroll and Wallace laud Everett because they see it as quantum mechanics in the raw -- the Schrodinger equation without extraneous additional assumptions. You seem bent on adding all these extraneous assumptions, most of which are not even consistent with the Schrodinger equation, and still claim that you are talking about the same model.

I think Everett assumed Born's rule as a kind of weight attached to each branch; so there was only one branch per result and the Born rule was assumed.

This is close to what Everett did. But Kent also considers a toy universe of this sort. As long as there is only one branch per result, weights such as those Everett proposed are purely decorative -- they fulfil no functional role, and the argument against the 'no weight' multiverse goes through unchanged: weights of the sort Everett proposed do not solve the problem of probability, nor do they make the theory scientific in the sense that data can be used to test the theory.

Kent also considers a toy universe more along the lines of the one you propose. When an experiment is performed, the universe is deleted, and successor universes are created in which the outcomes are treated differently: for example, more new universes are created for some results than for others. The number of such successors may be large, or even infinite. In the absence of data to give probability estimates, the inhabitants have no way to detect that the outcomes are being treated differently, or how many successor universes are being created. However, after N runs, there will be a large number of branches.

Kent is not convinced that this is enough, but he feels that it might be a step in the right direction. "If we could argue, perhaps using some form of anthropic reasoning, that there is an equal chance of finding oneself in any one of the branches, then the chance of finding oneself in a branch in which one concludes that the branch weights are close to the number of identical branches created for each result would be very close to one." He concludes: "It seems hard to make this argument rigorous. In particular, the notion of 'chance of finding oneself' in a particular branch doesn't seem easy to define properly. Still, we have an arguably natural measure on branches, the counting measure, according to which most of the inhabitants will arrive at (close to) the right theory of branch weights. That might perhaps be progress."
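The counting-measure claim can be checked with a toy calculation. Suppose, hypothetically, each trial replaces every branch by 501 copies recording "up" and 499 recording "down" (the multiplicities, trial count N, and 5% tolerance band are all assumptions for the illustration). The fraction of branches, by count, whose recorded frequency lies near 501/1000 is then a binomial sum:

```python
from math import comb

m_up, m_dn = 501, 499     # hypothetical copies created per trial, per outcome
p = m_up / (m_up + m_dn)  # fraction of new branches recording "up"
N = 1000                  # number of trials

# Counting measure: the fraction of the (m_up + m_dn)**N branches whose
# recorded "up" frequency lies within 0.05 of p is a binomial tail sum.
lo, hi = int(N * (p - 0.05)), int(N * (p + 0.05))
frac = sum(comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(lo, hi + 1))
# frac is close to 1: most branches, counted, record frequencies near p.
```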

The trouble, of course, is that any such theory with multiple replicated branches for each experimental result goes far beyond Everett's ideas, and is certainly not a natural extension of the Schrodinger equation.

Other theories about the origin of weights and probabilities in Many-worlds theory seem all to fall foul of Zurek's observation that such arguments are inherently circular: they rely on distinguishable observers in distinguishable branches, and such can arise only through the processes of decoherence and the approximate diagonalization of the density matrix. And, of course, such arguments all depend on the notion that small amplitudes correspond to low probabilities -- which is just the Born rule.

Bruce

Stathis Papaioannou

unread,
Feb 9, 2020, 1:48:27 AM2/9/20
to everyth...@googlegroups.com
Basically there are two theories of personal identity: the magical soul theory, which holds that your soul only goes into one body, and the reductionist theory, which holds that your mind is copied along with your body and each copy has equal claim to being a continuation of the original.
--
Stathis Papaioannou

Bruce Kellett

unread,
Feb 9, 2020, 2:11:33 AM2/9/20
to everyth...@googlegroups.com
There is also Nozick's closest continuer theory, in which new persons are created in such duplication scenarios. The issue is unlikely to be resolved until we have actual experience of body-mind duplication. And that is a long way off......

Bruce

smitra

unread,
Feb 9, 2020, 3:48:21 AM2/9/20
to everyth...@googlegroups.com
On 08-02-2020 07:00, Bruce Kellett wrote:
> On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
>
>> On 08-02-2020 05:19, Bruce Kellett wrote:
>>
>>> No, I am suggesting that Many-worlds is a failed theory, unable to
>>> account for everyday experience. A stochastic single-world theory
>> is
>>> perfectly able to account for what we see.
>>>
>>> Bruce
>>
>> Stochastic single-world theories make predictions that violate those
>> of
>> quantum mechanics.
>
> No they don't. When have violations of the quantum predictions been
> observed?

A single world theory must violate unitary time evolution, it has to
assume a violation of the Schrodinger equation. But there is no
experimental evidence for violations of the Schrodinger equation. While
one can make such assumptions and develop a formalism based on this, the
issue is then that in the absence of experimental proof that the
Schrodinger equation is going to be violated, one should not claim that
such a model is superior to another model that doesn't imply any new
physics.

The MWI may have some philosophical weaknesses like the derivation of
the Born rule but the pragmatic variant of it where you just assume the
Born rule is clearly superior to any other model where you're going to
just assume that the known laws of physics are going to be violated to
get to a model that to you looks more desirable from a philosophical
point of view.

>
>> If the MWI (in the general sense of there existing a
>> multiverse rather than any details of how to derive the Born rule)
>> is
>> not correct, then that's hard to reconcile with known experimental
>> results.
>
> All experimental results to date are consistent with a single-world
> theory. There are several possibilities for such a theory, but to
> date, experiment does not distinguish between them.

Single world theories require a violation of unitary time evolution of a
perfectly isolated system. No experiment has ever observed this.
>
>> New physics that so far has never been observed needs to be
>> assumed just to get rid of the Many Worlds. Also, this new physics
>> should appear not at the as of yet unprobed high energies where the
>> known laws of physics could plausibly break down, instead it would
>> have
>> to appear at the mesoscopic or macroscopic scale where the laws of
>> physics are essentially fixed.
>
> Bohm's theory does not require as-yet-unobserved new physics. GRW do
> postulate a new physical interaction, but that is below the level of
> current experimental detectability.

Bohm theory is not equivalent to QM; it only becomes equivalent to QM if
one imposes a condition known as "quantum equilibrium". In general, Bohm
theory in a condition of quantum non-equilibrium leads to violations of
the Born rule. See here for details:

https://en.wikipedia.org/wiki/Quantum_non-equilibrium

Then without any experimental evidence for the additional features of
Bohm theory such as the signatures of quantum non-equilibrium, why would
we prefer it over and above a theory that doesn't make such assumptions?
One would have to have very strong theoretical objections against the
theory. In case of the Standard Model one can predict that it will break
down at very high energies. But I don't see why the MWI in the pragmatic
sense where one assumes the Born rule is so bad that it merits
considering alternative theories, particularly if those alternative
theories make lots of unverified assumptions about new physics in
domains where new physics is thought to be unlikely to appear.
>
> Besides, why should you assume that the Schrodinger equation is the
> ultimate physical law?

It may be false, but absent experimental evidence that it is indeed
false, theories that imply that it's false shouldn't get the benefit of
the doubt just because they imply a single world.

Saibal

> Bruce
>

Bruce Kellett

unread,
Feb 9, 2020, 5:37:51 AM2/9/20
to everyth...@googlegroups.com
On Sun, Feb 9, 2020 at 7:48 PM smitra <smi...@zonnet.nl> wrote:
On 08-02-2020 07:00, Bruce Kellett wrote:
> On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
>
>> On 08-02-2020 05:19, Bruce Kellett wrote:
>>
>>> No, I am suggesting that Many-worlds is a failed theory, unable to
>>> account for everyday experience. A stochastic single-world theory
>> is
>>> perfectly able to account for what we see.
>>>
>>> Bruce
>>
>> Stochastic single-world theories make predictions that violate those
>> of
>> quantum mechanics.
>
> No they don't. When have violations of the quantum predictions been
> observed?

A single world theory must violate unitary time evolution, it has to
assume a violation of the Schrodinger equation. But there is no
experimental evidence for violations of the Schrodinger equation. While
one can make such assumptions and develop a formalism based on this, the
issue is then that in the absence of experimental proof that the
Schrodinger equation is going to be violated, one should not claim that
such a model is superior to another model that doesn't imply any new
physics.

So what? If Everettian QM doesn't work, as it has been shown to fail in that it does not recover normal scientific practice, then one must look to alternative theories. I have not advocated any particular theory, but a breakdown of unitary evolution is not such a big deal -- it is what we observe every day, after all. This is the heart of the quantum measurement problem.


The MWI may have some philosophical weaknesses like the derivation of
the Born rule but the pragmatic variant of it where you just assume the
Born rule is clearly superior to any other model where you're going to
just assume that the known laws of physics are going to be violated to
get to a model that to you looks more desirable from a philosophical
point of view.

The trouble is that even postulating the Born rule, ad hoc as in Copenhagen, does not get you out of the problems with Everett. As long as one follows Everett and assumes one branch for each component of the superposition, one is going to fail to explain normal scientific practice. If one follows Brent and Bruno and assumes that there are multiple branches for each experimental result, then one has lost touch with the Schrodinger equation anyway, Everett is out of the window, and there are still problems with the definition of probability.

It is probably a matter of which is the least bad theory at the moment. None of the available approaches is entirely satisfactory. But that is not an unusual situation in the development of physics.....
There is no evidence for any of this type of worry, either. So why bring it up?

Then without any experimental evidence for the additional features of
Bohm theory such as the signatures of quantum non-equilibrium, why would
we prefer it over and above a theory that doesn't make such assumptions?
One would have to have very strong theoretical objections against the
theory. In case of the Standard Model one can predict that it will break
down at very high energies. But I don't see why the MWI in the pragmatic
sense where one assumes the Born rule is so bad that it merits
considering alternative theories, particularly if those alternative
theories make lots of unverified assumptions about new physics in
domains where new physics is thought to be unlikely to appear.

Who says so? Sounds like special pleading to me.....
>
> Besides, why should you assume that the Schrodinger equation is the
> ultimate physical law?

It may be false, but absent experimental evidence that it is indeed
false, theories that imply that it's false shouldn't get the benefit of
the doubt just because they imply a single world.


Maybe single world theories are better adapted to explaining our ordinary experience -- and explaining everyday experience is, in the final analysis, the aim of any scientific theory.

Bruce

Alan Grayson

unread,
Feb 9, 2020, 10:55:22 AM2/9/20
to Everything List


On Sunday, February 9, 2020 at 3:37:51 AM UTC-7, Bruce wrote:
On Sun, Feb 9, 2020 at 7:48 PM smitra <smi...@zonnet.nl> wrote:
On 08-02-2020 07:00, Bruce Kellett wrote:
> On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
>
>> On 08-02-2020 05:19, Bruce Kellett wrote:
>>
>>> No, I am suggesting that Many-worlds is a failed theory, unable to
>>> account for everyday experience. A stochastic single-world theory
>> is
>>> perfectly able to account for what we see.
>>>
>>> Bruce
>>
>> Stochastic single-world theories make predictions that violate those
>> of
>> quantum mechanics.
>
> No they don't. When have violations of the quantum predictions been
> observed?

A single world theory must violate unitary time evolution, it has to
assume a violation of the Schrodinger equation. But there is no
experimental evidence for violations of the Schrodinger equation. While
one can make such assumptions and develop a formalism based on this, the
issue is then that in the absence of experimental proof that the
Schrodinger equation is going to be violated, one should not claim that
such a model is superior to another model that doesn't imply any new
physics.

So what? If Everettian QM doesn't work, as it has been shown to fail in that it does not recover normal scientific practice, then one must look to alternative theories. I have not advocated any particular theory, but a breakdown of unitary evolution is not such a big deal -- it is what we observe every day, after all. This is the heart of the quantum measurement problem.

You claim unitary evolution breaks down in the measurement problem. CMIIAW, but don't you also affirm decoherence theory, where unitary evolution does NOT break down in the measurement process? Which is it, or do I misconstrue your positions? TIA, AG

Brent Meeker

unread,
Feb 9, 2020, 1:16:33 PM2/9/20
to everyth...@googlegroups.com


On 2/9/2020 12:48 AM, smitra wrote:
> On 08-02-2020 07:00, Bruce Kellett wrote:
>> On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
>>
>>> On 08-02-2020 05:19, Bruce Kellett wrote:
>>>
>>>> No, I am suggesting that Many-worlds is a failed theory, unable to
>>>> account for everyday experience. A stochastic single-world theory
>>> is
>>>> perfectly able to account for what we see.
>>>>
>>>> Bruce
>>>
>>> Stochastic single world theories make predictions that violate those
>>> of
>>> quantum mechanics.
>>
>> No they don't. When have violations of the quantum predictions been
>> observed?
>
> A single world theory must violate unitary time evolution, it has to
> assume a violation of the Schrodinger equation. But there is no
> experimental evidence for violations of the Schrodinger equation.

Except for every measurement ever made of a quantum variable.

Brent

> While one can make such assumptions and develop a formalism based on
> this, the issue is then that in the absence of experimental proof that
> the Schrodinger equation is going to be violated, one should not claim
> that such a model is superior than another model that doesn't imply
> any new physics.
>
> The MWI may have some philosophical weaknesses like the derivation of
> the Born rule but the pragmatic variant of it where you just assume
> the Born rule is clearly superior to any other model where you're
> going to just assume that the known laws of physics are going to be
> violated to get to a model that to you looks more desirable from a
> philosophical point of view.
>
>>
>>> If the MWI (in the general sense of there existing a
>>> multiverse rather than any details of how to derive the Born rule)
>>> is
>>> not correct, then that's hard to reconcile with known experimental
>>> results.
>>
>> All experimental results to date are consistent with a single-world
>> theory. There are several possibilities for such a theory, but to
>> date, experiment does not distinguish between them.
>
> Single world theories require a violation of unitary time evolution of
> a perfectly isolated system. No experiment has ever observed this.

Because a perfectly isolated system can't be observed.
Even though a single world is a well confirmed and often repeated
empirical observation?

Brent

Brent Meeker

unread,
Feb 9, 2020, 1:20:17 PM2/9/20
to everyth...@googlegroups.com
I think Bruno's hope is to recover the Schroedinger equation as a kind of stat-mech limit of his universal dovetailer threads.  This might comport with Zurek's idea of quantum Darwinism.

Brent

Bruce Kellett

unread,
Feb 9, 2020, 4:43:08 PM2/9/20
to everyth...@googlegroups.com
And pigs might fly.......

Bruce

smitra

unread,
Feb 10, 2020, 1:08:52 AM2/10/20
to everyth...@googlegroups.com
The focus on Everettian QM to argue against MWI in general is a straw
man attack. The main issue is unitary time evolution. This is a rather
unambiguous thing that one can check in experiments. A breakdown of
unitary time evolution has never been observed.
>
>> The MWI may have some philosophical weaknesses like the derivation
>> of
>> the Born rule but the pragmatic variant of it where you just assume
>> the
>> Born rule is clearly superior to any other model where you're going
>> to
>> just assume that the known laws of physics are going to be violated
>> to
>> get to a model that to you looks more desirable from a philosophical
>>
>> point of view.
>
> The trouble is that even postulating the Born rule, ad hoc as in
> Copenhagen, does not get you out of the problems with Everett. As long
> as one follows Everett and assumes one branch for each component of
> the superposition, one is going to fail to explain normal scientific
> practice. If one follows Brent and Bruno and assumes that there are
> multiple branches for each experimental result, then one has lost
> touch with the Schrodinger equation anyway, Everett is out of the
> window, and there are still problems with the definition of
> probability.
>
> It is probably a matter of which is the least bad theory at the
> moment. None of the available approaches is entirely satisfactory. But
> that is not an unusual situation in the development of physics.....
>

This is an artifact of promoting "normal scientific practice" to a more
fundamental law of Nature than the actual laws of physics. Scientific
practice is an imperfect method; it may well have flaws, and it may not
always yield the correct outcome. So, being able to construct a
counterexample in which some observers would draw the wrong conclusion if
the laws of physics had a certain structure is not evidence
that the laws of physics cannot have that structure.
Quantum non-equilibrium is not just something that has not been
detected, it would also violate basic physical principles. This simply
illustrates that there are no good ways to get to a single world theory.
The laws of physics would have to be radically different from anything
we've seen.
>
>> Then without any experimental evidence for the additional features
>> of
>> Bohm theory such as the signatures of quantum non-equilibrium, why
>> would
>> we prefer it over and above a theory that doesn't make such
>> assumptions?
>> One would have to have very strong theoretical objections against
>> the
>> theory. In case of the Standard Model one can predict that it will
>> break
>> down at very high energies. But I don't see why the MWI in the
>> pragmatic
>> sense where one assumes the Born rule is so bad that it merits
>> considering alternative theories, particularly if those alternative
>> theories make lots of unverified assumptions about new physics in
>> domains where new physics is thought to be unlikely to appear.
>
> Who says so? Sounds like special pleading to me.....

Unitary time evolution would have to fail at larger scales where the
known dynamical laws of physics are even more solidly established.
>
>>>
>>> Besides, why should you assume that the Schrodinger equation is
>> the
>>> ultimate physical law?
>>
>> It may be false, but absent experimental evidence that it is indeed
>> false, theories that imply that it's false shouldn't get the benefit
>> of
>> the doubt just because they imply a single world.
>
> Maybe single world theories are better adapted to explaining our
> ordinary experience -- and explaining everyday experience is, in the
> final analysis, the aim of any scientific theory.

Appealing to ordinary experience is bad science. This is why it took
Lorentz several years to accept the Einsteinian interpretation of his
transformation laws.
>
> Bruce
>
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to everything-li...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAFxXSLRJ%3DegSuY2VoYJKFaBP7N7_GODyOMYf2DAvVO-H5WoM2A%40mail.gmail.com
> [1].
>
>
> Links:
> ------
> [1]
> https://groups.google.com/d/msgid/everything-list/CAFxXSLRJ%3DegSuY2VoYJKFaBP7N7_GODyOMYf2DAvVO-H5WoM2A%40mail.gmail.com?utm_medium=email&utm_source=footer

smitra

unread,
Feb 10, 2020, 1:17:15 AM2/10/20
to everyth...@googlegroups.com
On 09-02-2020 19:16, 'Brent Meeker' via Everything List wrote:
> On 2/9/2020 12:48 AM, smitra wrote:
>> On 08-02-2020 07:00, Bruce Kellett wrote:
>>> On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
>>>
>>>> On 08-02-2020 05:19, Bruce Kellett wrote:
>>>>
>>>>> No, I am suggesting that Many-worlds is a failed theory, unable to
>>>>> account for everyday experience. A stochastic single-world theory
>>>> is
>>>>> perfectly able to account for what we see.
>>>>>
>>>>> Bruce
>>>>
>>>> Stochastic single world theories make predictions that violate those
>>>> of
>>>> quantum mechanics.
>>>
>>> No they don't. When have violations of the quantum predictions been
>>> observed?
>>
>> A single world theory must violate unitary time evolution, it has to
>> assume a violation of the Schrodinger equation. But there is no
>> experimental evidence for violations of the Schrodinger equation.
>
> Except for every measurement ever made of a quantum variable.

This is also explained by unitary time evolution, as the observed
system is not an isolated system.
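The point that measurement-like behaviour can emerge from purely unitary evolution of system plus environment can be illustrated with a small numpy sketch. Everything below is an invented toy (the interaction, one controlled rotation per environment qubit, and the angle theta = 1.6 are arbitrary choices): every step is a unitary on the joint state, yet the system's reduced density matrix loses its off-diagonal coherence as more environment qubits couple in.

```python
import numpy as np
from functools import reduce

def kron(*mats):
    """Tensor product of a sequence of matrices/vectors."""
    return reduce(np.kron, mats)

def decohered_system(n_env, theta=1.6):
    """System qubit in (|0>+|1>)/sqrt(2) interacts unitarily with n_env
    environment qubits (all starting in |0>): each environment qubit is
    rotated by theta iff the system is |1>.  Returns the system's reduced
    density matrix after tracing out the environment."""
    I = np.eye(2)
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    R = np.array([[c, -s], [s, c]])

    # Initial joint state: (|0>+|1>)/sqrt(2) tensored with |0...0>
    psi = kron(np.array([1.0, 1.0]) / np.sqrt(2),
               *([np.array([1.0, 0.0])] * n_env))

    # One controlled rotation per environment qubit; each U is unitary.
    for k in range(n_env):
        env = [I] * n_env
        env[k] = R
        U = kron(P0, *([I] * n_env)) + kron(P1, *env)
        psi = U @ psi

    m = psi.reshape(2, -1)          # rows: system, columns: environment
    return m @ m.conj().T           # partial trace over the environment

# Coherence |rho_01| falls from 0.5 toward 0 as the environment grows,
# with no collapse anywhere in the dynamics.
for n in (0, 1, 4, 10):
    print(n, round(abs(decohered_system(n)[0, 1]), 4))
```

The diagonal of the reduced density matrix stays at (0.5, 0.5) throughout; only the interference terms are suppressed, which is the standard decoherence story in miniature.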
>
> Brent
>
>> While one can make such assumptions and develop a formalism based on
>> this, the issue is then that in the absence of experimental proof that
>> the Schrodinger equation is going to be violated, one should not claim
>> that such a model is superior than another model that doesn't imply
>> any new physics.
>>
>> The MWI may have some philosophical weaknesses like the derivation of
>> the Born rule but the pragmatic variant of it where you just assume
>> the Born rule is clearly superior to any other model where you're
>> going to just assume that the known laws of physics are going to be
>> violated to get to a model that to you looks more desirable from a
>> philosophical point of view.
>>
>>>
>>>> If the MWI (in the general sense of there existing a
>>>> multiverse rather than any details of how to derive the Born rule)
>>>> is
>>>> not correct, then that's hard to reconcile with known experimental
>>>> results.
>>>
>>> All experimental results to date are consistent with a single-world
>>> theory. There are several possibilities for such a theory, but to
>>> date, experiment does not distinguish between them.
>>
>> Single world theories require a violation of unitary time evolution of
>> a perfectly isolated system. No experiment has ever observed this.
>
> Because a perfectly isolated system can't be observed.

Observers interact locally with the observed system, so nothing would
change if the observed system plus observer were located inside a giant
isolated system. So, whatever observation is, it cannot fundamentally
depend on the system not being perfectly isolated.
It's not confirmed and repeated. One has to do an experiment that can
distinguish between the alternative theories. Unitary time evolution is
easily falsifiable. What's wrong is to claim that an experiment that on
its own would be consistent with collapse is somehow evidence for
collapse if it is also consistent with unitary time evolution when
unitary time evolution and not collapse theories are consistent with the
totality of all the experimental results.

Saibal
>
> Brent

Brent Meeker

unread,
Feb 10, 2020, 1:27:02 AM2/10/20
to everyth...@googlegroups.com
?? What do you call the measurement implemented by a projection
operator?  Are there observations that don't involve a projection?

Brent

Brent Meeker

unread,
Feb 10, 2020, 1:37:06 AM2/10/20
to everyth...@googlegroups.com


On 2/9/2020 10:17 PM, smitra wrote:
On 09-02-2020 19:16, 'Brent Meeker' via Everything List wrote:
On 2/9/2020 12:48 AM, smitra wrote:
On 08-02-2020 07:00, Bruce Kellett wrote:
On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:

On 08-02-2020 05:19, Bruce Kellett wrote:

No, I am suggesting that Many-worlds is a failed theory, unable to
account for everyday experience. A stochastic single-world theory
is
perfectly able to account for what we see.

Bruce

Stochastic single world theories make predictions that violate those
of
quantum mechanics.

No they don't. When have violations of the quantum predictions been
observed?

A single world theory must violate unitary time evolution, it has to assume a violation of the Schrodinger equation. But there is no experimental evidence for violations of the Schrodinger equation.

Except for every measurement ever made of a quantum variable.

This os also explained by unitary time evolution as there observed system is not an isolated system.

Which just pushes the problem off to another place.



Brent

While one can make such assumptions and develop a formalism based on this, the issue is then that in the absence of experimental proof that the Schrodinger equation is going to be violated, one should not claim that such a model is superior than another model that doesn't imply any new physics.

The MWI may have some philosophical weaknesses like the derivation of the Born rule but the pragmatic variant of it where you just assume the Born rule is clearly superior to any other model where you're going to just assume that the known laws of physics are going to be violated to get to a model that to you looks more desirable from a philosophical point of view.


If the MWI (in the general sense of there existing a
multiverse rather than any details of how to derive the Born rule)
is
not correct, then that's hard to reconcile with known experimental
results.

All experimental results to date are consistent with a single-world
theory. There are several possibilities for such a theory, but to
date, experiment does not distinguish between them.

Single world theories require a violation of unitary time evolution of a perfectly isolated system. No experiment has ever observed this.

Because a perfectly isolated system can't be observed.

Observers interact locally with the observed system, so nothing would change if the observed system plus observer were located inside a giant isolated system. So, whatever observation is cannot funbdamentally depend on the system not being perfectly isolated.

That's funny, since  20 lines above you invoked the lack of isolation to explain measurements.

Brent




New physics that so far has never been observed needs to be
assumed just to get rid of the Many Worlds. Also, this new physics
should appear not at the as of yet unprobed high energies where the
known laws of physics could plausibly break down, instead it would
have
to appear at the mesoscopic or macroscopic scale where the laws of
physics are essentially fixed.

Bohm's theory does not require as-yet-unobserved new physics. GRW do
postulate a new physical interaction, but that is below the level of
current experimental detectability.

Bohm theory is not equivalent to QM, it only becomes equivalent to QM if one imposes a condition known as "quantum equilibrium". In general, Bohm theory in a condition of quantum non-equilibrium leads to violations of the Born rule. See here for details:

https://en.wikipedia.org/wiki/Quantum_non-equilibrium

Then without any experimental evidence for the additional features of Bohm theory such as the signatures of quantum non-equilibrium, why would we prefer it over and above a theory that doesn't make such assumptions? One would have to have very strong theoretical objections against the theory. In case of the Standard Model one can predict that it will break down at very high energies. But I don't see why the MWI in the pragmatic sense where one assumes the Born rule is so bad that it merits considering alternative theories, particularly if those alternative theories make lots of unverified assumptions about new physics in domains where new physics is thought to be unlikely to appear.

Besides, why should you assume that the Schrodinger equation is the
ultimate physical law?

It may be false, but absent experimental evidence that it is indeed false, theories that imply that it's false shouldn't get the benefit of the doubt just because they imply a single world.

Even though a single world is a well confirmed and often repeated
empirical observation?

It's not confirmed and repeated. One has to do an experiment that can distinguish between the alternative theories. Unitary time evolution is easily falsifiable. What's wrong is to claim that an experiment that on its own would be consistent with collapse is somehow evidence for collapse if it is also consistent with unitary time evolution when unitary time evolution and not collapse theories are consistent with the totality of all the experimental results.

Collapse theories are also consistent with fairies erasing the other possible results.  Unless MWI can produce a result inconsistent with SWI, it is just adding otiose unobservables.  There are infinitely many ways to add things that can't be observed.  The promise of MWI was that everything would be explained by the Schroedinger equation alone.  But it turns out that you still need to postulate the Born rule, and then you still need some as-yet-unexplained axiom to say when a measurement has occurred and the Born rule can be applied.  In other words, MWI didn't deliver on its promise...although it led to some interesting research.

Brent

Bruce Kellett

unread,
Feb 10, 2020, 2:18:02 AM2/10/20
to everyth...@googlegroups.com
That would not be the way most physicists would see it. They take Everettian QM as basic. Unfortunately, Everettian QM has hit a catastrophic train wreck -- it is clearly not viable as an understanding of quantum physics. The reason for this is a clear corollary of Kent's argument.

Simply put, Everett takes the Schrodinger equation as basic. Acting on a general quantum state with the Schrodinger equation gives the relative states, and there can only ever be one relative state for each term in the expansion in terms of some set of basis states. The amplitudes of interest are the coefficients in this expansion. However, these coefficients, or amplitudes, are just ordinary complex numbers, so they are completely transparent to the SE.

The set of sequences of outcomes of repeated trials (measurements on replications of the initial state) is then all n^N sequences of outcomes (labelled by 0 to n-1 for the n possible outcomes over N trials). This set of sequences is independent of the amplitudes in the original expansion of the state of interest in terms of the set of basis states. Consequently, the data one obtains from this set of experiments, namely one of the set of possible sequences of the integers 0 to n-1, are completely independent of the amplitudes in the original expansion. One can, therefore, gain no information about these amplitudes from the set of N trials. The Born rule is irrelevant, because the data are necessarily independent of the coefficients/amplitudes.

This proves that Everett's approach from the SE, where there is only one branch for each possible outcome in a single trial, cannot account for the way in which experimental results are used in practice. Given Everett, experiments cannot reveal anything at all about the original state. So Everett fails as a scientific theory. End of story. Period. Nothing more to be said.
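The counting step in this argument is easy to make concrete. In the toy sketch below (an illustration, not Kent's or Bruce's code; N = 12 and p = 0.9 are arbitrary values), the Everettian branch set after N binary measurements is the same for every initial state, while a stochastic single-world (Born rule) sample does track the amplitude:

```python
import itertools
import random

def branch_sequences(N):
    """Everett with one branch per outcome: after N binary measurements
    there are exactly 2**N branches, one per outcome string.  The
    amplitudes never enter, so this set is the same for any state."""
    return list(itertools.product((0, 1), repeat=N))

N = 12
branches = branch_sequences(N)

# Fraction of branches whose relative frequency of outcome 1 is near 0.5:
# a pure counting fact, identical whatever the amplitudes were.
near_half = sum(abs(sum(b) / N - 0.5) <= 0.1 for b in branches) / len(branches)
print(len(branches), round(near_half, 3))

# A stochastic single-world model *does* depend on the amplitude:
random.seed(0)
p = 0.9          # |amplitude|^2 for outcome 1 -- an arbitrary example value
mean_freq = sum(sum(random.random() < p for _ in range(N)) / N
                for _ in range(2000)) / 2000
print(round(mean_freq, 2))   # clusters near p, not near 0.5
```

The first number printed for the branch picture would be unchanged if p were 0.5 or 0.999, which is exactly the complaint: the branch data carry no information about the amplitudes.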

 
The main issue is unitary time evolution. This is a rather
unambiguous thing that one can check in experiments. A breakdown of
unitary time evolution has never been observed.

As Brent has pointed out, unitary evolution breaks down every time we observe a particular result for a measurement (to say nothing of black holes). Your focus on unitary evolution is misplaced -- it is not universally observed.

Many-worlds theory might be salvageable from the train wreck of Everett, but it is not clear how. It seems to be widely assumed that there is more than one branch for each basis state, even though that is not what Everett or the SE say. It is not clear how this could ever happen in a principled way: it certainly is not consistent with unitary evolution via the Schrodinger equation. There may be a way out of this, but none has been offered to date, and I would not hold out many prospects for success in such a venture.

Bruce

Philip Thrift

unread,
Feb 10, 2020, 5:10:48 AM2/10/20
to Everything List


On Sunday, February 9, 2020 at 12:20:17 PM UTC-6, Brent wrote:

I think Bruno's hope is to recover the Schroedinger equation as a kind of stat-mech limit of his universal dovetailer threads.  This might comport with Zurek's idea of quantum Darwinism.

Brent



Bruno's 'dovetailer' seems more like Seth Lloyd's  'sum over computations': 


An appealing choice of quantum computation is one which consists of a coherent superposition of all possible quantum computations, as in the case of a quantum Turing machine whose input tape is in a uniform superposition of all possible programs. Such a ‘sum over computations’ encompasses both regular and random architectures within its superposition, and weighs computations according to the length of the program to which they correspond: algorithmically simple computations that arise from short programs have higher weight. The observational consequences of this and other candidate computations will be the subject of future work.
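The length-based weighting Lloyd describes can be shown in miniature. In the sketch below, the 'programs', their outputs, and the prefix-free code are all invented for illustration; a real sum over computations would run programs on a universal machine rather than look them up in a table:

```python
# Each 'program' is a bit string; a program of length L gets weight 2**(-L),
# so algorithmically simple outputs (reachable by short programs) dominate.
# The codes below are prefix-free, so by the Kraft inequality the total
# weight is at most 1.
programs = {
    "0":    "A",   # a short program for output A
    "10":   "A",   # a second, longer route to A
    "110":  "B",
    "1110": "C",
    "1111": "C",
}

def algorithmic_weight(output):
    """Sum of 2**(-len(p)) over every program p producing this output."""
    return sum(2.0 ** -len(p) for p, out in programs.items() if out == output)

for out in "ABC":
    print(out, algorithmic_weight(out))
# Output A carries the most weight: it has the shortest program.
```

This is the same weighting that appears in algorithmic probability: halving the weight per extra program bit is what makes "short programs have higher weight" precise.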



@philipthrift

Alan Grayson

unread,
Feb 10, 2020, 6:34:59 AM2/10/20
to Everything List


On Sunday, February 9, 2020 at 11:16:33 AM UTC-7, Brent wrote:


On 2/9/2020 12:48 AM, smitra wrote:
> On 08-02-2020 07:00, Bruce Kellett wrote:
>> On Sat, Feb 8, 2020 at 4:21 PM smitra <smi...@zonnet.nl> wrote:
>>
>>> On 08-02-2020 05:19, Bruce Kellett wrote:
>>>
>>>> No, I am suggesting that Many-worlds is a failed theory, unable to
>>>> account for everyday experience. A stochastic single-world theory
>>> is
>>>> perfectly able to account for what we see.
>>>>
>>>> Bruce
>>>
>>> Stochastic single world theories make predictions that violate those
>>> of
>>> quantum mechanics.
>>
>> No they don't. When have violations of the quantum predictions been
>> observed?
>
> A single world theory must violate unitary time evolution, it has to
> assume a violation of the Schrodinger equation. But there is no
> experimental evidence for violations of the Schrodinger equation.

Except for every measurement ever made of a quantum variable.

Brent

But doesn't decoherence theory, which I recall you like, use unitary time evolution in an attempt to solve the measurement problem? Or did I misread you? AG  

scerir

unread,
Feb 10, 2020, 8:35:44 AM2/10/20
to everyth...@googlegroups.com

Physics and the Totalitarian Principle

(Submitted on 10 Jul 2019)
What is sometimes called the "totalitarian principle," a metaphysical doctrine often associated with the famous physicist Murray Gell-Mann, states that everything allowed by the laws of nature must actually exist. The principle is closely related to the much older "principle of plenitude." Although versions of the totalitarian principle are well known to physicists and often appear in the physics literature, it has attracted little reflection. Apart from a critical examination of the origin and history of the totalitarian principle, the paper discusses this and the roughly similar plenitude principle from a conceptual perspective. In addition it offers historical analyses of a few case studies from modern physics in which reasoning based on the totalitarian principle can be identified. The cases include the prediction of the magnetic monopole, the hypothesis of radioactive protons, and the discovery of the muon neutrino. Moreover, attention is called to the new study of metamaterials.

"Feynman later commented on his path integral approach to quantum mechanics as follows (Feynman, Leighton and Sands 1966, p. 19-9):
Is it true that the particle doesn’t just “take the right path” but that it looks at all the other possible trajectories? … The miracle of it all is, of course, that it does just that. That’s what the laws of quantum mechanics say. [The principle of least action] isn’t that a particle takes the path of least action, but that it smells all the paths in the neighborhood and chooses the one that has the least action."
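The "path of least action" in the quoted passage can be checked numerically. The toy below (a sketch of my own; g, T, the grid size, the step size, and the iteration count are all arbitrary choices) discretizes a path for a particle of unit mass in uniform gravity with fixed endpoints and runs plain gradient descent on the action; the minimizer converges to the classical parabola x(t) = (g/2) t (T - t):

```python
import numpy as np

g, T, N = 9.8, 2.0, 50
dt = T / N
t = np.linspace(0.0, T, N + 1)

def action(x):
    """Discrete action S = sum( 0.5*(dx/dt)**2 - g*x ) * dt  (mass = 1)."""
    v = np.diff(x) / dt
    return np.sum(0.5 * v**2 - g * x[:-1]) * dt

x = np.zeros(N + 1)                  # start from the straight, non-classical path
for _ in range(20000):               # gradient descent on the interior points
    grad = (2 * x[1:-1] - x[2:] - x[:-2]) / dt - g * dt
    x[1:-1] -= 0.01 * grad           # endpoints x[0] = x[N] = 0 stay fixed

classical = 0.5 * g * t * (T - t)    # solution of x'' = -g with x(0) = x(T) = 0
print(round(float(np.max(np.abs(x - classical))), 6))   # tiny: found the parabola
assert action(x) < action(np.zeros(N + 1))
```

Setting the discrete gradient to zero gives x[i+1] - 2*x[i] + x[i-1] = -g*dt**2, i.e. the discretized Euler-Lagrange equation x'' = -g, which is why the descent lands on the parabola.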

Alan Grayson

unread,
Feb 10, 2020, 1:56:22 PM2/10/20
to Everything List
IMO, there must be a deeper principle, as yet undiscovered, to explain the principle of least action. BTW, your page reference makes no sense. AG 

Philip Thrift

unread,
Feb 10, 2020, 2:18:17 PM2/10/20
to Everything List

Bruno Marchal

unread,
Feb 11, 2020, 7:16:39 AM2/11/20
to everyth...@googlegroups.com

On 7 Feb 2020, at 12:07, Bruce Kellett <bhkel...@gmail.com> wrote:

On Fri, Feb 7, 2020 at 9:54 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Thursday, February 6, 2020 at 10:59:27 PM UTC-6, Bruce wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Bruce


This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure.

I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.

Each individual sequence of heads/tails would also occur with probability one in the corresponding WM scenario, and in the coin-tossing experience.

In the MWI, what you describe is what has motivated the introduction of a frequency operator, and that is the right thing to do in QM. I think you might confuse the first person and the third person points of view, in the WM-scenario and in the MWI (which is coherent with your non-mechanist stance).

Bruno




Bruce

 
An observer might see their branch as having sufficient randomness to be a Bayesian prior, but to derive a full theory these outlier branches with the appearance of structure have to be eliminated. This is not a devastating blow to MWI, but it is a limitation on its explanatory power. Of course with statistical physics we have these logarithms and the rest and such slop tends to be "washed out" for large enough sample space. 

No matter how hard we try it is tough to make this all epistemic, say Bayesian etc, or ontological with frequentist statistics. 

LC 


Bruno Marchal

unread,
Feb 11, 2020, 7:26:10 AM2/11/20
to everyth...@googlegroups.com

> On 7 Feb 2020, at 18:09, Lawrence Crowell <goldenfield...@gmail.com> wrote:
>
> MWI is not that bad. All quantum interpretations have some negative qualities. I think all quantum interpretations are auxiliary postulates not provable in QM.


I think, with some others, that SWE = MWI. That is so true that the founders added the wave reduction postulate to avoid the MWI. I would say that the MWI is a theorem of QM, and that by adding the wave collapse, the theory is either inconsistent or incomplete, but no working completion has ever been able to really suppress the superposition (aka many histories/worlds/dreams).

Then, it is a (not completely trivial) theorem in arithmetic that all computations exist and have a complex relative measure with each other. I got the MWI “interpretation" of arithmetic well before I realised that the physicists were already there. All computations with oracles exist in the internal limit of the first person views associated with any universal number in arithmetic.
(I recall that a number u is universal if phi_u(<x, y>) = phi_x(y). I say that u emulates x on y. Thanks to Kleene’s predicate, this can be translated into the language of arithmetic (classical logic + the symbols s, 0, + and *), and the existence of computations is satisfied in all models of Robinson Arithmetic, a very weak arithmetic accepted even by the ultra-finitists.)
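The definition phi_u(<x, y>) = phi_x(y) can be mimicked in a toy sketch. Everything below is an illustration: a Python list of three functions stands in for a real enumeration of all partial computable functions, and <x, y> is the standard Cantor pairing function:

```python
def pair(x, y):
    """Cantor pairing <x, y> -> a single natural number."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

# Toy 'Godel numbering': index i stands for the program computing phi[i].
phi = [
    lambda n: 0,        # phi_0: the constant-zero function
    lambda n: n + 1,    # phi_1: successor
    lambda n: n * n,    # phi_2: squaring
]

def phi_u(z):
    """The universal function: phi_u(<x, y>) = phi_x(y)."""
    x, y = unpair(z)
    return phi[x](y)

assert all(phi_u(pair(x, y)) == phi[x](y)
           for x in range(len(phi)) for y in range(20))
print(phi_u(pair(2, 7)))    # phi_2(7) = 49
```

A genuinely universal u differs from this table lookup in one essential way: it must handle every program index, which is why real universal machines interpret program code rather than dispatch over a finite list.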

Bruno



>

Bruno Marchal

unread,
Feb 11, 2020, 7:41:56 AM2/11/20
to everyth...@googlegroups.com

On 7 Feb 2020, at 20:45, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 2/7/2020 3:07 AM, Bruce Kellett wrote:
On Fri, Feb 7, 2020 at 9:54 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
On Thursday, February 6, 2020 at 10:59:27 PM UTC-6, Bruce wrote:


This argument from Kent completely destroys Everett's attempt to derive the Born rule from his many-worlds approach to quantum mechanics. In fact, it totally undermines most attempts to derive the Born rule from any branching theory, and undermines attempts to justify ignoring branches on which the Born rule weights are disconfirmed. In the many-worlds case, recall, all observers are aware that other observers with other data must exist, but each is led to construct a spurious measure of importance that favours their own observations against the others', and  this leads to an obvious absurdity. In the one-world case, observers treat what actually happened as important, and ignore what didn't happen: this doesn't lead to the same difficulty.

Bruce


This appears to argue that observers in a branch are limited in their ability to take the results of their branch as a Bayesian prior. This limitation occurs for the coin flip case where some combinations have a high degree of structure. Say all heads or a repeated sequence of heads and tails with some structure, or apparent structure. For large N though these are a diminishing measure.

I don't think you have fully come to terms with Kent's argument. How do you determine the measure on the observed outcomes? The argument that such 'outlier' sequences are of small measure fails at the first hurdle, because all sequences have equal measure -- all are equally likely. In fact, all occur with unit probability in MWI.

In practice one doesn't look for a measure on specific outcomes sequences because you're testing a theory that only predicts one probability.  You flip coins to test whether P(heads)=0.5 which you can confirm or refute without even knowing the sequences.  It might be that every sequence you get by flipping is in the form HTHTHTHTHTHTHT... which would support P(H)=0.5.  It would be a different world than ours, possibly with different physics; but that would be a matter of  testing a different theory.

One of the problems with MWI is that can't seem to explain probability without sneaking in some equivalent concept. The obvious version of MWI would be branch counting in which every measurement-like event produces an enormous number of branches and the number of branches with spin UP relative to the number with spin DOWN gives the odds of spin UP.  A meta-physical difficulty is the all the spin UP branches are identical and so by Leibniz's identity of indiscernibles are really only one; but maybe this inapplicable since the measure involves lots of environment that would make it discernible.


With Mechanism, and apparently with Everett QM, we are multiplied by everything we don’t depend on. The reason why “the particles go through both slits” is that your mind state does not depend on which slit the particle goes through. Well, it is actually a bit more complex, but that is the general idea. In fact, I recently got some argument that we might need the large cardinals, even the very large ones (like the cardinals of Woodin or of Laver), in this quest for the measure. In fact the first person requires the full sigma_1(a) truth, that is, sigma_1 in the oracle a, for all oracles. That provides the ability to use strong axioms in set theory to get a measure space, but that is required intuitively also, with step 7 of the UDA, actually. 

Bruno 




Brent


Bruce

 
An observer might see their branch as having sufficient randomness to be a Bayesian prior, but to derive a full theory these outlier branches with the appearance of structure have to be eliminated. This is not a devastating blow to MWI, but it is a limitation on its explanatory power. Of course with statistical physics we have these logarithms and the rest and such slop tends to be "washed out" for large enough sample space. 

No matter how hard we try, it is tough to make this all epistemic (say, Bayesian), or ontological with frequentist statistics. 
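The "washed out" claim can be made quantitative with a counting-measure toy model (the function name and the eps=0.1 deviation band are my own illustration, not from the post):

```python
from math import comb

def deviant_fraction(n, eps=0.1):
    """Fraction of the 2**n equal-weight binary branches whose head
    frequency deviates from 1/2 by more than eps."""
    lo = int((0.5 - eps) * n)   # smallest 'typical' head count
    hi = int((0.5 + eps) * n)   # largest 'typical' head count
    typical = sum(comb(n, k) for k in range(lo, hi + 1))
    return 1 - typical / 2 ** n

for n in (10, 100, 1000):
    print(n, deviant_fraction(n))  # shrinks rapidly as n grows
```

By the law of large numbers the structured outlier branches carry a vanishing fraction of the counting measure as n grows, which is the sense in which the slop gets washed out; the residual question is why that counting measure should be the right one.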

LC 
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.


Bruno Marchal

unread,
Feb 11, 2020, 7:47:31 AM2/11/20
to everyth...@googlegroups.com

On 7 Feb 2020, at 21:27, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 2/7/2020 7:47 AM, Philip Thrift wrote:


On Friday, February 7, 2020 at 5:59:39 AM UTC-6, Lawrence Crowell wrote:

I don't think MWI is that much worse than other interpretations. In fact I tend to see it as better than most. 

LC

 


It is sad (to me) to think that 100 years from now there will be any MWI adherents, except as some curious cult.  

Sean Carroll promotes on his Twitter (I follow him just to see what nutty thing he says) that he looks forward to the day when all physicists are Mad-Dog Everettians.

Mad-Dog Everettianism: https://arxiv.org/abs/1801.08132

It is not only a rabbit hole, it is a cult that has taken over physicists (a lot of them anyway).

It's not only MWI, it's also the infinite universe where there are infinitely many copies of you and where everything happens.  And the multiverse where all possible (mathematically consistent?) universes exist. 

That can be shown to be inconsistent, unless some precautions are taken, but then we will miss some mathematical structures.



We need a way to think about these "infinities".  Are they meaningful?  What would it mean to get rid of them and theorize that everything is finite?  Are there some intermediate options?  Where are the meta-physicists when you need them?

Mechanism does answer this: all computations exist (indeed, in all models of arithmetic), and that generates an internal phenomenology which cannot be bounded in any mathematical way. We can say that it is beyond the supercompact infinities studied today, and there is some recent evidence that we might need the cardinals of Laver, which are very close to the "Kunen bound", above which cardinals can no longer be extended (at least if we keep the axiom of choice) without leading to inconsistency. 

Bruno




Brent


John Clark

unread,
Feb 11, 2020, 7:59:52 AM2/11/20
to everyth...@googlegroups.com


On Fri, Feb 7, 2020 at 7:16 PM Bruce Kellett <bhkel...@gmail.com> wrote:

> Many-worlds is incompatible with the Born rule


John K Clark