
Why even physicists still don’t understand quantum theory 100 years on


John Clark

Feb 4, 2025, 8:26:35 AM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
In the February 3, 2025 issue of the journal Nature, Sean Carroll wrote this article. Here are a few interesting quotes:

"In the Everettian, or many-worlds, interpretation, introduced by Hugh Everett, observers become entangled with the systems they measure, and every allowed outcome is realized in separate branches of the wavefunction, which are interpreted as parallel worlds."

"As far as anyone knows, there is no experiment that could distinguish between pilot-wave and Everettian approaches. (Advocates of each tend to argue that the other is simply ill defined.) So, physicists don’t agree on what precisely a measurement is, whether wavefunctions represent physical reality, whether there are physical variables in addition to the wavefunction or whether the wavefunction always obeys the Schrödinger equation."



John K Clark    See what's on my new list at  Extropolis


Cosmin Visan

Feb 4, 2025, 10:03:15 AM
to Everything List
Because they hate consciousness with all their being.

spudb...@aol.com

Feb 4, 2025, 12:56:30 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
Bohmian mechanics v Everett-DeWitt-Wheeler?
For Carroll, it probably means they're the same. Indistinguishable. 
Ok, scratch that off the physics bucket list. 


Alan Grayson

Feb 4, 2025, 1:46:09 PM
to Everything List
On Tuesday, February 4, 2025 at 10:56:30 AM UTC-7 spudb...@aol.com wrote:
Bohmian mechanics v Everett-DeWitt-Wheeler?
For Carroll, it probably means they're the same. Indistinguishable. 
Ok, scratch that off the physics bucket list. 

I don't get your point, or Carroll's for that matter. I really don't see the interpretations as equivalent. They don't seem remotely in the same ballpark. AG 

John Clark

Feb 4, 2025, 2:03:15 PM
to everyth...@googlegroups.com, extro...@googlegroups.com

On Tue, Feb 4, 2025 at 12:56 PM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

Bohmian mechanics v Everett-DeWitt-Wheeler?
For Carroll, it probably means they're the same. Indistinguishable. 

This is what I said about that about a month ago: 


Pilot Wave Theory keeps Schrödinger's Equation but needs to add an entirely new and very complicated equation, called the Pilot Wave Equation, that contains non-local variables. When an electron enters the two-slit experiment, the Pilot Wave in effect produces a little arrow pointing to one of the electrons with a caption under it saying "this is the real electron, ignore all the other ones". The Pilot Wave does absolutely nothing except erase unwanted universes; it is for this reason that some have called Pilot Wave theory the Many Worlds theory in denial.

The Pilot Wave is unique in another way: it can affect matter but matter cannot affect it. If it's real, it would be the first exception in the history of physics to Newton's credo that for every action there is a reaction. Even after the object it is pointing to is destroyed, the pilot wave continues on, although now it is pointing at nothing and has no further effect on anything in the universe. Also, nobody has ever been able to make a relativistic version of the Pilot Wave Equation; Paul Dirac found a version of Schrödinger's Equation that was compatible with special relativity as early as 1928.
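
For reference, the extra equation being described is usually written as the de Broglie-Bohm guidance equation; in a standard textbook form, the velocity of particle k is

    \frac{d\mathbf{Q}_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left[\frac{\nabla_k \psi}{\psi}\right](\mathbf{Q}_1,\dots,\mathbf{Q}_N,t),

and the non-locality is visible on its face: particle k's velocity depends on the instantaneous positions of all the other particles through psi.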

John K Clark    See what's on my new list at  Extropolis
 

Quentin Anciaux

Feb 4, 2025, 2:38:37 PM
to everyth...@googlegroups.com, extro...@googlegroups.com
The fundamental absurdity of single-history frameworks becomes clear when we consider the reliance on theoretical constructs that, by definition, never exist and never will. How can one justify using mathematical tools that invoke nonexistent possibilities to explain a reality where only one sequence of events is ever realized? If something never existed, has no causal influence, and will never exist in any possible future, how does it play any role in explaining what does exist?

This contradiction is evident in interpretations like Bohmian mechanics, where the pilot wave guides particles but remains completely unobservable and uninteractive beyond that role. It’s an invisible, untouchable entity that affects matter but is never affected in return—something that is functionally indistinguishable from the pure abstractions of probability waves in a single-world interpretation. In both cases, explanations rely on constructs that have no true existence beyond their mathematical form.

A single-history universe that leans on unrealized possibilities to justify probability is making an implicit appeal to something that doesn’t and will never exist. It treats the wavefunction as a real tool for calculating outcomes while simultaneously denying that the alternatives it describes have any grounding in reality. This is the absurdity: how can something that never existed be part of an explanation for what does?

In contrast, in a many-worlds framework, all possibilities exist and are real branches of the wavefunction, providing an actual basis for probability. The probabilities are not just mathematical conveniences; they describe distributions of real outcomes across real histories. This removes the need for metaphysical hand-waving about non-existent possibilities influencing reality.

If physics is about describing reality, then relying on things that are, by construction, eternally non-existent to justify observed phenomena is conceptually incoherent. It is an attempt to have it both ways—to use abstract possibilities when convenient while denying their reality when inconvenient. That contradiction is why single-history frameworks ultimately fail to provide a satisfying foundation for probability and existence itself.

Quentin 


Brent Meeker

Feb 4, 2025, 6:22:56 PM
to everyth...@googlegroups.com



On 2/4/2025 11:38 AM, Quentin Anciaux wrote:
The fundamental absurdity of single-history frameworks becomes clear when we consider the reliance on theoretical constructs that, by definition, never exist and never will. How can one justify using mathematical tools that invoke nonexistent possibilities to explain a reality where only one sequence of events is ever realized? If something never existed, has no causal influence, and will never exist in any possible future, how does it play any role in explaining what does exist?

This contradiction is evident in interpretations like Bohmian mechanics, where the pilot wave guides particles but remains completely unobservable and uninteractive beyond that role. It’s an invisible, untouchable entity that affects matter but is never affected in return—something that is functionally indistinguishable from the pure abstractions of probability waves in a single-world interpretation. In both cases, explanations rely on constructs that have no true existence beyond their mathematical form.

A single-history universe that leans on unrealized possibilities to justify probability
"Justify"??  Unrealized possibilities are what probabilities quantify.  If all possibilities were realized the wouldn't have probabilities assigned to them...exactly the problem that arises in MWI.


is making an implicit appeal to something that doesn’t and will never exist. It treats the wavefunction as a real tool for calculating outcomes while simultaneously denying that the alternatives it describes have any grounding in reality. This is the absurdity: how can something that never existed be part of an explanation for what does?
That is just a lot of emotive talk.  All the alternatives have a "grounding in reality"; that's what makes them possibilities with definite probabilities.

Brent


In contrast, in a many-worlds framework, all possibilities exist and are real branches of the wavefunction, providing an actual basis for probability. The probabilities are not just mathematical conveniences; they describe distributions of real outcomes across real histories. This removes the need for metaphysical hand-waving about non-existent possibilities influencing reality.

If physics is about describing reality, then relying on things that are, by construction, eternally non-existent to justify observed phenomena is conceptually incoherent. It is an attempt to have it both ways—to use abstract possibilities when convenient while denying their reality when inconvenient. That contradiction is why single-history frameworks ultimately fail to provide a satisfying foundation for probability and existence itself.

Quentin 

On Tue, Feb 4, 2025 at 7:03 PM John Clark <johnk...@gmail.com> wrote:

On Tue, Feb 4, 2025 at 12:56 PM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

Bohmian mechanics v Everett-DeWitt-Wheeler?
For Carroll, it probably means they're the same. Indistinguishable. 

This is what I said about that about a month ago: 


Pilot Wave Theory keeps Schrödinger's Equation but needs to add an entirely new and very complicated equation, called the Pilot Wave Equation, that contains non-local variables. When an electron enters the two-slit experiment, the Pilot Wave in effect produces a little arrow pointing to one of the electrons with a caption under it saying "this is the real electron, ignore all the other ones". The Pilot Wave does absolutely nothing except erase unwanted universes; it is for this reason that some have called Pilot Wave theory the Many Worlds theory in denial.

The Pilot Wave is unique in another way: it can affect matter but matter cannot affect it. If it's real, it would be the first exception in the history of physics to Newton's credo that for every action there is a reaction. Even after the object it is pointing to is destroyed, the pilot wave continues on, although now it is pointing at nothing and has no further effect on anything in the universe. Also, nobody has ever been able to make a relativistic version of the Pilot Wave Equation; Paul Dirac found a version of Schrödinger's Equation that was compatible with special relativity as early as 1928.

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Feb 4, 2025, 11:11:11 PM
to everyth...@googlegroups.com
 "...the probability of observing one particle to be somewhere can depend on where we observe another particle to be, and this remains true no matter how far apart they are."
    I think the "no matter how far apart they are" part is almost certainly untrue.  Over some distances there must be interaction with spacetime itself, i.e. the metric field, that destroys the entanglement.

Brent

Quentin Anciaux

Feb 5, 2025, 2:39:10 AM
to everyth...@googlegroups.com
Brent,

You say that unrealized possibilities are what probabilities quantify, but in a single-history framework, those possibilities never had any existence beyond the formalism. If only one history is real, then all other possibilities were never actually possible in any meaningful way—they were never real candidates for realization, just mathematical constructs. That’s not an emotive argument; it’s pointing out that the entire notion of probability in such a framework is detached from anything real.

If probability is supposed to quantify real possibilities, then in a world where only one history exists for all eternity, what exactly is being quantified? If an event with a calculated probability of 50% never happens in this one history, then its true probability was always 0%. Your framework claims to allow for multiple possibilities, but in practice, it only ever realizes one, making the rest nothing more than empty labels.

And you assert that alternatives have a "grounding in reality"—but what does that mean in a framework where they never actually happen? If they had a genuine grounding, they would have to be part of reality in some form, even if only probabilistically. But in a single-history framework, that never happens. The probabilities exist only in the mind of the observer, with no external ontological reality. They are tools that describe nothing but a retrospective justification of what already happened.

The supposed "problem" in MWI—that all possibilities are realized—actually solves this issue. It gives probabilities a real basis in the structure of the universe rather than treating them as abstract bookkeeping. The probabilities describe real distributions across real histories rather than referring to things that were never real to begin with.

The single-world view wants to use probability while simultaneously denying the existence of the things probability refers to. That’s not just emotive talk—it’s a contradiction at the foundation of the framework.

Quentin 

Bruce Kellett

Feb 5, 2025, 3:04:57 AM
to everyth...@googlegroups.com
On Wed, Feb 5, 2025 at 6:39 PM Quentin Anciaux <allc...@gmail.com> wrote:
Brent,

You say that unrealized possibilities are what probabilities quantify, but in a single-history framework, those possibilities never had any existence beyond the formalism. If only one history is real, then all other possibilities were never actually possible in any meaningful way—they were never real candidates for realization, just mathematical constructs. That’s not an emotive argument; it’s pointing out that the entire notion of probability in such a framework is detached from anything real.

If probability is supposed to quantify real possibilities, then in a world where only one history exists for all eternity, what exactly is being quantified? If an event with a calculated probability of 50% never happens in this one history, then its true probability was always 0%. Your framework claims to allow for multiple possibilities, but in practice, it only ever realizes one, making the rest nothing more than empty labels.

And you assert that alternatives have a "grounding in reality"—but what does that mean in a framework where they never actually happen? If they had a genuine grounding, they would have to be part of reality in some form, even if only probabilistically. But in a single-history framework, that never happens. The probabilities exist only in the mind of the observer, with no external ontological reality. They are tools that describe nothing but a retrospective justification of what already happened.

The supposed "problem" in MWI—that all possibilities are realized—actually solves this issue. It gives probabilities a real basis in the structure of the universe rather than treating them as abstract bookkeeping. The probabilities describe real distributions across real histories rather than referring to things that were never real to begin with.

The single-world view wants to use probability while simultaneously denying the existence of the things probability refers to. That’s not just emotive talk—it’s a contradiction at the foundation of the framework.

Quentin

Have you ever heard of repeated experiments?

Bruce

Quentin Anciaux

Feb 5, 2025, 3:53:19 AM
to everyth...@googlegroups.com
Bruce,

Repeated experiments don’t change the core issue. Even if you perform an experiment a trillion times, in a single-history universe, there is still only one realized sequence of outcomes. That means certain possibilities with greater than zero probability will simply never happen—not just in a given run, but ever.

If the universe has a unique history and that history unfolds in a specific way, then all the unrealized possibilities are not just unrealized—they were never part of reality in any way. They had no causal link to what happened, no mechanism by which they could have happened, and no effect on the realized sequence of events.

So what does it mean to say an event had a 10% probability if, across all of history, it never occurs? In a framework where only one history is real, probability becomes a misleading abstraction—it suggests possibilities that were never truly possible. The math remains consistent, but it describes nothing but an idealized concept detached from what actually exists.

Contrast this with a framework where all possibilities are realized: probabilities describe distributions of real events across real histories. The meaning of probability is preserved because it refers to actual occurrences, not hypothetical ones that were never part of reality to begin with.

In a single-history framework, probability becomes a story we tell ourselves about things that never were and never could be. That’s the absurdity.

Quentin 


Bruce Kellett

Feb 5, 2025, 5:48:36 AM
to everyth...@googlegroups.com
On Wed, Feb 5, 2025 at 7:53 PM Quentin Anciaux <allc...@gmail.com> wrote:

Repeated experiments don’t change the core issue. Even if you perform an experiment a trillion times, in a single-history universe, there is still only one realized sequence of outcomes. That means certain possibilities with greater than zero probability will simply never happen—not just in a given run, but ever.

In a long series of repeated experiments, all possible outcomes are realized a number of times that reflects their theoretical probabilities. The sequence of N repetitions gives you an estimate of the probability distribution. If every outcome is realized on every trial, you do not get a unique insight into the underlying probability distribution, because all possible sequences are realized, and this tells you nothing about the true probabilities. Besides, there is no interaction between branches and there is no causal link between outcomes, despite what you claim to the contrary.
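
As a quick sketch of how repeated trials estimate a probability (a hypothetical simulation, assuming a simple Bernoulli source; the statistical error shrinks roughly like 1/sqrt(N)):

    import random

    def relative_frequency(p, n, seed=0):
        """Fraction of successes in n independent Bernoulli(p) trials."""
        rng = random.Random(seed)
        return sum(rng.random() < p for _ in range(n)) / n

    # With p = 0.3 the estimate tightens as n grows, roughly like 1/sqrt(n):
    for n in (100, 10_000, 1_000_000):
        print(n, relative_frequency(0.3, n))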

Bruce

Quentin Anciaux

Feb 5, 2025, 5:55:11 AM
to everyth...@googlegroups.com
Bruce,

That still doesn't address the core issue. If the universe has a unique history and a finite existence, then there is a fundamental limit to the number of repetitions that can ever occur. There is no guarantee that all possible outcomes will ever be realized, no matter how large N is. Some events with nonzero probability simply will never happen. That alone is enough to undermine frequentism in a single-history framework—it relies on the assumption that probabilities reflect long-run frequencies, but if the history is finite and unique, the necessary "long run" does not exist.

Even in an infinite universe, if history is still unique, there is no mechanism ensuring that all outcomes occur in proportions that match their theoretical probabilities. Some possibilities with nonzero probability may remain unrealized forever, making their assigned probabilities meaningless in any real sense. They were never actual possibilities in the first place—just theoretical artifacts with no impact on reality.

Your argument assumes that probabilities describe reality in the single-world framework, but without an ensemble where all possibilities exist in some way, this assumption collapses. Probabilities become detached from what actually happens and instead become abstract formalism with no grounding in the real world. That’s the problem: the single-world view wants to use probability theory as if all possibilities have meaning while simultaneously denying that they do.

In contrast, in a framework where all possibilities are realized in different branches, probability retains its explanatory power. It describes actual distributions of outcomes rather than pretending that unrealized events still somehow "exist" in a purely mathematical sense. If the universe is unique, and history is unique, then probability has no true foundation—it’s just a game with numbers, untethered from what actually happens.

Quentin 


John Clark

Feb 5, 2025, 8:28:40 AM
to everyth...@googlegroups.com
On Tue, Feb 4, 2025 at 6:22 PM Brent Meeker <meeke...@gmail.com> wrote:

If all possibilities were realized they wouldn't have probabilities assigned to them...exactly the problem that arises in MWI.

You've forgotten that it's not just an electron that is a quantum object and thus part of the Universal Wave Function (UWF); you are also part of the UWF. There are an astronomical number of branches of the UWF, perhaps an infinite number, and those branches do not interact with each other and thus can be interpreted as separate "worlds". You the observer are stuck in just one of those branches and thus lack sufficient information to know if you are in the branch where the cat is alive or the branch where the cat is dead; you need to open the box and look in to get that information. Before that, you do what you always do when you don't have enough information to be certain: you work with probabilities.

The quantum bomb tester demonstrates that it is possible to obtain information about an object without interacting with it in any way. The bomb does explode in some branches (a.k.a. worlds) of the UWF, but if you set things up properly you the observer will be in a branch where the bomb did NOT explode, and yet you know for certain the bomb is working properly and will explode if it detects even one photon. Many Worlds can easily explain how interaction-free measurement could work, and do so without invoking some sort of ill-defined wave function collapse, by simply acknowledging that all outcomes occur, each in its own independent branch of the UWF. But the competitors of Many Worlds struggle to give an intuitive explanation of how interaction-free measurement could possibly work. And this is important!
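
For concreteness, here is a minimal numerical sketch of the interferometer arithmetic behind the bomb tester (an idealized 50/50 Mach-Zehnder setup in the style of Elitzur and Vaidman; the code is an illustration, not anything from the post):

    import numpy as np

    # 50/50 beam splitter acting on the (upper, lower) path amplitudes.
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)

    photon = np.array([1, 0])   # photon enters the upper port

    # Dud bomb (or no bomb): the two paths recombine and interfere,
    # so the photon always exits port 1 and port 0 stays dark.
    print(np.abs(BS @ BS @ photon) ** 2)    # -> [0., 1.]

    # Live bomb blocking the lower path: that amplitude is absorbed.
    mid = BS @ photon
    explode = np.abs(mid[1]) ** 2           # bomb explodes with probability 1/2
    out = BS @ np.array([mid[0], 0])        # surviving amplitude recombines
    print(explode, np.abs(out) ** 2)        # -> 0.5 [0.25, 0.25]
    # A click at the formerly dark port 0 (probability 1/4) certifies a
    # live bomb even though no photon ever reached it.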

Years ago in high school physics I was taught a derivation of Heisenberg's Uncertainty Principle that started from the assumption that you'd have to use photons to detect something and that would always disturb what you're looking at, but I now know that derivation was invalid; it got the right answer but for the wrong reason. The real reason lies in the mathematical structure of quantum mechanics: the uncertainty principle is derived from the non-commuting nature of observable operators, like position and momentum, or energy and time.
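
The modern derivation being alluded to is the Robertson relation, which holds for any state and any pair of observables and mentions no measurement disturbance at all:

    \sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr|,
    \qquad [\hat x,\hat p] = i\hbar \;\Rightarrow\; \sigma_x\,\sigma_p \ge \frac{\hbar}{2}.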

I've also heard the "using photons to detect something disturbs it" argument used to explain why Maxwell's Demon does not violate the Second Law of Thermodynamics, but that argument is also invalid. The true answer is that unless the demon's brain had infinite memory storage, at some point it would have to erase information, and that would require energy and increase entropy.
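
The erasure cost in question is Landauer's bound: erasing one bit in an environment at temperature T dissipates at least

    E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J} \quad (T = 300\ \mathrm{K}),

which is why a demon with finite memory eventually pays an entropy bill that cancels whatever it gained by sorting molecules.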

John K Clark    See what's on my new list at  Extropolis

John Clark

Feb 5, 2025, 8:54:26 AM
to everyth...@googlegroups.com
On Wed, Feb 5, 2025 at 5:48 AM Bruce Kellett <bhkel...@gmail.com> wrote:
 
The sequence of N repetitions gives you an estimate of the probability distribution.

OK, but even if N=1, such as when I flip a coin just once, I can still obtain a probability of it coming up heads that makes sense. And the long-run relative frequency of an event occurring in repeated trials is just one way to define probability; you could also define it as the ratio of favorable outcomes to all possible outcomes, or as a degree of belief based on available evidence that can be updated when new evidence becomes available.
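
That last definition is the Bayesian one; a minimal sketch of such belief-updating for a coin, using a conjugate Beta prior (an illustration, not anything from the post itself):

    # Beta(a, b) encodes a degree of belief about a coin's heads-probability;
    # observed heads/tails update it by simple counting (Bayes' rule for the
    # conjugate Beta-Bernoulli pair).
    def update(a, b, heads, tails):
        return a + heads, b + tails

    a, b = 1, 1                  # uniform prior: no evidence yet
    a, b = update(a, b, 7, 3)    # evidence: 7 heads, 3 tails in 10 flips
    print(a / (a + b))           # posterior mean degree of belief: 0.666...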

 there is no interaction between branches and there is no causal link.

True. And that is exactly why those independent branches of the Universal Wave Function can be thought of as independent worlds.

 John K Clark    See what's on my new list at  Extropolis


Alan Grayson

Feb 5, 2025, 1:42:57 PM
to Everything List
And why the MWI is unverifiable and tantamount to a fantasy. AG

Brent Meeker

Feb 5, 2025, 2:56:59 PM
to everyth...@googlegroups.com



On 2/4/2025 11:38 PM, Quentin Anciaux wrote:
Brent,

You say that unrealized possibilities are what probabilities quantify, but in a single-history framework, those possibilities never had any existence beyond the formalism.
I don't know what "formalism" means in that context.  When you calculate probabilities of events in QM the events are not "formalisms".  They are implied by the same theories and mechanics that attribute possibility to the events that were observed.  And on other occasions they are the events that happen.  So they are not mere formalism; their possibility and probability are as real as the possibility and probability of the observed events.


If only one history is real, then all other possibilities were never actually possible in any meaningful way—they were never real candidates for realization, just mathematical constructs. That’s not an emotive argument; it’s pointing out that the entire notion of probability in such a framework is detached from anything real.

If probability is supposed to quantify real possibilities, then in a world where only one history exists for all eternity, what exactly is being quantified? If an event with a calculated probability of 50% never happens in this one history, then its true probability was always 0%.
That's contrary to the meaning of probability.  You are assuming underlying determinism.  You seem to conceive of probability as always being 1 or 0, which is the same as denying the very concept of probability.


Your framework claims to allow for multiple possibilities, but in practice, it only ever realizes one, making the rest nothing more than empty labels.
It's not "my framework"; it's the theory of probability.  I think you are confused by the fact that probability theory has many applications.  You're stuck on the application to ignorance in a deterministic case.  But QM is not deterministic.  The probabilities don't refer just to ignorance.  Just because there is a single world doesn't make it a deterministic world.  In fact MWI has more trouble representing probabilities.


And you assert that alternatives have a "grounding in reality"—but what does that mean in a framework where they never actually happen?
It means that the same theory that predicted the thing that happened with probability 0.3 also predicted the thing that didn't happen with probability 0.6, and this theory has been verified by finding that in long strings of experiments the latter happens twice as often as the former.

If they had a genuine grounding, they would have to be part of reality in some form, even if only probabilistically.
I'm telling you they are part of reality probabilistically.  What do you mean by that phrase, if not what I've been saying?


But in a single-history framework, that never happens. The probabilities exist only in the mind of the observer, with no external ontological reality. They are tools that describe nothing but a retrospective justification of what already happened.
Energy, momentum, entropy, gravity...you could say that they are all just tools in the mind of the physicist with no external ontological reality.  They are just terms in our mathematics.


The supposed "problem" in MWI—that all possibilities are realized—actually solves this issue. It gives probabilities a real basis in the structure of the universe rather than treating them as abstract bookkeeping.
No, according to you they set all probabilities to 1.


The probabilities describe real distributions across real histories rather than referring to things that were never real to begin with.
MWI doesn't distribute across histories.  It asserts that all possibilities occur in each event "with probability 1".  That's why the assignment of probabilities is a problem for MWI.

Brent

Quentin Anciaux

Feb 5, 2025, 3:10:31 PM
to everyth...@googlegroups.com
Brent,

You're arguing that probabilities in a single-world framework are as real as those of observed events because they are derived from the same equations. But if only one history ever happens, then unrealized possibilities are just numbers in a calculation, not something that ever had a chance of being real. The theory predicts probabilities, but what actually occurs is just one unique sequence of events. The rest—no matter how formally predicted—never existed in any form beyond the equations.

You claim that this does not imply determinism, but the fact remains that only one history ever unfolds. Whether the process is called "random" or not, in practical terms, there is no actual underlying ensemble of events—there is just the one sequence that reality plays out. That makes probability, in this framework, purely descriptive of an imagined set of possibilities that never had any ontological status.

You say that unrealized events are "part of reality probabilistically," but what does that even mean when they never actually happen? If an event is assigned a 60% probability but never occurs in the only history that exists, then in what sense was it ever a real possibility? It was just an abstract calculation with no actual link to reality. You keep referring to long strings of experiments as if an infinite series of trials is guaranteed to sample all possibilities—but in a finite universe with a unique history, that is simply not true.

Your attempt to equate probability with other concepts like energy or entropy fails because those are directly observable and quantifiable properties of physical systems. Probability, in a single-history framework, is not a property of the world—it’s a mental construct we impose on it. It’s not like energy or momentum; it’s a way of reasoning about things that will never actually exist.

MWI, on the other hand, does not set all probabilities to 1 arbitrarily. It gives probability a real foundation by making it about relative frequencies across real histories. There, probabilities describe distributions of actualized outcomes, not abstract unrealized ones. In contrast, in a single-world view, probability is just a way of pretending that things that never existed somehow mattered. That is the contradiction you keep glossing over.

Brent Meeker

Feb 5, 2025, 3:18:25 PM
to everyth...@googlegroups.com
On 2/5/2025 2:54 AM, Quentin Anciaux wrote:
Bruce,

That still doesn't address the core issue. If the universe has a unique history and a finite existence, then there is a fundamental limit to the number of repetitions that can ever occur. There is no guarantee that all possible outcomes will ever be realized, no matter how large N is. Some events with nonzero probability simply will never happen. That alone is enough to undermine frequentism in a single-history framework—it relies on the assumption that probabilities reflect long-run frequencies, but if the history is finite and unique, the necessary "long run" does not exist.
I recommend that you never play cards for money.


Even in an infinite universe, if history is still unique, there is no mechanism ensuring that all outcomes occur in proportions that match their theoretical probabilities.
Yet they do match.  QM is the most accurate and predictive theory there is.


Some possibilities with nonzero probability may remain unrealized forever, making their assigned probabilities meaningless in any real sense. They were never actual possibilities in the first place—just theoretical artifacts with no impact on reality.

Your argument assumes that probabilities describe reality in the single-world framework, but without an ensemble where all possibilities exist in some way, this assumption collapses.
Where they all exist the probabilities (according to you) become 1, and "probability" is meaningless.  I think you are just confused because you don't distinguish between the theory of probability and its several different applications.  You seem to think the world has to be only one certain way for it to apply.  Try reading the attached.

Brent
Ch1-Probability-Statistics.pdf

Quentin Anciaux

Feb 5, 2025, 3:36:55 PM
to everyth...@googlegroups.com
Brent,

I went through the document you sent, and it outlines the different interpretations of probability: mathematical, physical symmetry, degree of belief, and empirical frequency. But none of these resolve the core issue in a single-history universe—where probability is supposed to describe "possibilities" that, in the end, never had any reality.

Your frequentist approach assumes that, given enough trials, outcomes will appear in proportions that match their theoretical probabilities. But in a finite, single-history universe, there is no guarantee that will ever happen. Some events with nonzero probability simply won’t occur—not because of statistical fluctuations, but because history only plays out one way. In that case, were those possibilities ever really possible? If something assigned a probability of 10% never happens in the actual course of the universe, then in what meaningful way was it ever a possibility?

You argue that if all possibilities are realized, probability loses its meaning. But in a single-history world, probability is just as meaningless because it describes outcomes that never had a chance of being real. If probability is supposed to quantify potential realities, then in a framework where only one reality exists, probability is nothing more than a retrospective justification—it has no actual explanatory power.

The math remains internally consistent, but it becomes an empty formalism, detached from anything real. The whole structure relies on pretending that unrealized events still "exist" in some abstract sense, even though they never affect reality. That’s the contradiction at the heart of the single-history view. It uses probability to describe possibilities while simultaneously denying that those possibilities ever had a chance to be real.


Bruce Kellett

Feb 5, 2025, 5:06:47 PM
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 7:36 AM Quentin Anciaux <allc...@gmail.com> wrote:
Brent,

I went through the document you sent, and it outlines the different interpretations of probability: mathematical, physical symmetry, degree of belief, and empirical frequency. But none of these resolve the core issue in a single-history universe—where probability is supposed to describe "possibilities" that, in the end, never had any reality.

Your frequentist approach assumes that, given enough trials, outcomes will appear in proportions that match their theoretical probabilities. But in a finite, single-history universe, there is no guarantee that will ever happen. Some events with nonzero probability simply won’t occur—not because of statistical fluctuations, but because history only plays out one way. In that case, were those possibilities ever really possible? If something assigned a probability of 10% never happens in the actual course of the universe, then in what meaningful way was it ever a possibility?

You argue that if all possibilities are realized, probability loses its meaning. But in a single-history world, probability is just as meaningless because it describes outcomes that never had a chance of being real. If probability is supposed to quantify potential realities, then in a framework where only one reality exists, probability is nothing more than a retrospective justification—it has no actual explanatory power.

It is a shame that you think that quantum mechanics, with its reliance on probability calculations, has no actual explanatory power. That is contrary to the experience of quantum physicists for close to 100 years. Good to see that being out on an impossible limb is still attractive to some people...

Bruce

Quentin Anciaux

Feb 5, 2025, 5:16:04 PM
to everyth...@googlegroups.com
Bruce,

Quantum mechanics has explanatory power because it provides accurate predictions and a framework for modeling reality. The problem isn’t with quantum mechanics itself—it’s with trying to reconcile probability with a single-history universe where only one sequence of events ever occurs.

In a framework where only one history unfolds, probability is purely descriptive—it does not explain why this history, rather than any other, is the one that exists. It assigns numbers to theoretical possibilities that never had a chance of being real. You keep asserting that probabilities are meaningful in a single-history view, but meaningful in what sense? If a certain event, despite being assigned a 30% probability, never happens in the one realized history, then in what sense was it ever a possibility?

In contrast, in a framework where all possibilities are realized, probability maintains a clear meaning: it describes the relative measure of outcomes across the full set of realized possibilities. In that case, probability is tied to something real, rather than just being a tool we use to pretend that nonexistent possibilities matter.

The fact that quantum mechanics works well does not mean that a single-history interpretation is logically coherent when it comes to probability. You’re conflating the success of QM with the philosophical implications of trying to force probability into a framework where unrealized possibilities never had any reality at all. That’s the problem you’re not addressing.


Alan Grayson

Feb 5, 2025, 5:29:10 PM
to Everything List
On Wednesday, February 5, 2025 at 3:16:04 PM UTC-7 Quentin Anciaux wrote:
Bruce,

Quantum mechanics has explanatory power because it provides accurate predictions and a framework for modeling reality. The problem isn’t with quantum mechanics itself—it’s with trying to reconcile probability with a single-history universe where only one sequence of events ever occurs.

In a framework where only one history unfolds, probability is purely descriptive—it does not explain why this history, rather than any other, is the one that exists. It assigns numbers to theoretical possibilities that never had a chance of being real. You keep asserting that probabilities are meaningful in a single-history view, but meaningful in what sense? If a certain event, despite being assigned a 30% probability, never happens in the one realized history, then in what sense was it ever a possibility?

In contrast, in a framework where all possibilities are realized, probability maintains a clear meaning: it describes the relative measure of outcomes across the full set of realized possibilities. In that case, probability is tied to something real, rather than just being a tool we use to pretend that nonexistent possibilities matter.

The fact that quantum mechanics works well does not mean that a single-history interpretation is logically coherent when it comes to probability. You’re conflating the success of QM with the philosophical implications of trying to force probability into a framework where unrealized possibilities never had any reality at all. That’s the problem you’re not addressing.

Why do you assume that some non-zero probabilities never occur? You have no way of knowing this even if it's true. Meanwhile, you prefer the MWI, which can't be verified. Puzzling preferences. AG

Quentin Anciaux

Feb 5, 2025, 5:36:11 PM
to everyth...@googlegroups.com


On Wed, Feb 5, 2025 at 10:25 PM Alan Grayson <agrays...@gmail.com> wrote:


On Wednesday, February 5, 2025 at 1:36:55 PM UTC-7 Quentin Anciaux wrote:
Brent,

I went through the document you sent, and it outlines the different interpretations of probability: mathematical, physical symmetry, degree of belief, and empirical frequency. But none of these resolve the core issue in a single-history universe—where probability is supposed to describe "possibilities" that, in the end, never had any reality.

Your frequentist approach assumes that, given enough trials, outcomes will appear in proportions that match their theoretical probabilities. But in a finite, single-history universe, there is no guarantee that will ever happen. Some events with nonzero probability simply won’t occur—not because of statistical fluctuations, but because history only plays out one way. In that case, were those possibilities ever really possible? If something assigned a probability of 10% never happens in the actual course of the universe, then in what meaningful way was it ever a possibility?

You argue that if all possibilities are realized, probability loses its meaning. But in a single-history world, probability is just as meaningless because it describes outcomes that never had a chance of being real. If probability is supposed to quantify potential realities, then in a framework where only one reality exists, probability is nothing more than a retrospective justification—it has no actual explanatory power.

The math remains internally consistent, but it becomes an empty formalism, detached from anything real. The whole structure relies on pretending that unrealized events still "exist" in some abstract sense, even though they never affect reality. That’s the contradiction at the heart of the single-history view. It uses probability to describe possibilities while simultaneously denying that those possibilities ever had a chance to be real.

Why do you assume that some non-zero probabilities never occur? How could you know this? Meanwhile, you prefer a theory, MWI, that can't be verified. Puzzling preferences. AG

AG,

I assume that some nonzero probabilities never occur because, in a single-history universe with finite time and a unique trajectory, there is no guarantee that every possible outcome will ever be realized. If history unfolds in only one way, then there will inevitably be events assigned nonzero probability that simply never happen. That’s not an assumption—it’s an unavoidable consequence of having only one realized history.

Meanwhile, you act as if probability distributions in a single-history universe retain meaning even when certain outcomes never manifest. But if an event with a 10% probability never happens in the actual history of the universe, then in what sense was that probability meaningful? The theory assigned a chance to something that was never a real possibility in the only existing history. That turns probability into a purely abstract tool with no ontological grounding—it describes things that were never going to happen anyway.

As for verification, the issue is not about choosing a theory that "can’t be verified." The problem is that the single-history view relies on unobservable, nonexistent possibilities to justify probability while simultaneously denying their existence. It wants the predictive power of probability theory but refuses to acknowledge the implications of what probability actually represents. That’s not just puzzling—it’s self-contradictory.

On Wed, Feb 5, 2025 at 8:18 PM Brent Meeker <meeke...@gmail.com> wrote:
On 2/5/2025 2:54 AM, Quentin Anciaux wrote:
Bruce,

That still doesn't address the core issue. If the universe has a unique history and a finite existence, then there is a fundamental limit to the number of repetitions that can ever occur. There is no guarantee that all possible outcomes will ever be realized, no matter how large N is. Some events with nonzero probability simply will never happen. That alone is enough to undermine frequentism in a single-history framework—it relies on the assumption that probabilities reflect long-run frequencies, but if the history is finite and unique, the necessary "long run" does not exist.
I recommend that you never play cards for money.

Even in an infinite universe, if history is still unique, there is no mechanism ensuring that all outcomes occur in proportions that match their theoretical probabilities.
Yet they do match.  QM is the most accurate and predictive theory there is.

Some possibilities with nonzero probability may remain unrealized forever, making their assigned probabilities meaningless in any real sense. They were never actual possibilities in the first place—just theoretical artifacts with no impact on reality.

Your argument assumes that probabilities describe reality in the single-world framework, but without an ensemble where all possibilities exist in some way, this assumption collapses.
Where they all exist the probabilities (according to you) become 1, and "probability" is meaningless.  I think you are just confused because you don't distinguish between the theory of probability and its several different applications.  You seem to think the world has to be only one certain way for it to apply.  Try reading the attached.

Brent
Probabilities become detached from what actually happens and instead become abstract formalism with no grounding in the real world. That’s the problem: the single-world view wants to use probability theory as if all possibilities have meaning while simultaneously denying that they do.

In contrast, in a framework where all possibilities are realized in different branches, probability retains its explanatory power. It describes actual distributions of outcomes rather than pretending that unrealized events still somehow "exist" in a purely mathematical sense. If the universe is unique, and history is unique, then probability has no true foundation—it’s just a game with numbers, untethered from what actually happens.

Quentin


Bruce Kellett

Feb 5, 2025, 5:46:27 PM
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 9:16 AM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

Quantum mechanics has explanatory power because it provides accurate predictions and a framework for modeling reality. The problem isn’t with quantum mechanics itself—it’s with trying to reconcile probability with a single-history universe where only one sequence of events ever occurs.

In a framework where only one history unfolds, probability is purely descriptive—it does not explain why this history, rather than any other, is the one that exists.

You seem to have difficulty with the concept of a completely random event -- one that does not have a 'classical' mechanistic explanation. Sorry about that, but quantum events have a tendency to be completely random (within a well-defined probability distribution).

It assigns numbers to theoretical possibilities that never had a chance of being real. You keep asserting that probabilities are meaningful in a single-history view, but meaningful in what sense? If a certain event, despite being assigned a 30% probability, never happens in the one realized history, then in what sense was it ever a possibility?

In the sense that it will happen approximately 30% of the time in repeated trials. You object to this answer because there can always be low-probability events that never actually happen in any finite sequence of trials. I agree, but that is just the nature of probability....

In contrast, in a framework where all possibilities are realized, probability maintains a clear meaning: it describes the relative measure of outcomes across the full set of realized possibilities. In that case, probability is tied to something real, rather than just being a tool we use to pretend that nonexistent possibilities matter.

Unfortunately, the idea that all possibilities are realized on every trial is in direct conflict with the Born rule for probabilities. To demonstrate this, let me spell out a specific example. Consider an experiment on a two-state system, with a wave function of, say,

    |psi> = a|0> + b|1>,

where |a|^2 + |b|^2 = 1 specifies the normalization of the state. If we now measure this state according to the variable with eigenstates |0> and |1>, we get two branches, one with outcome '0', and the other with outcome '1'. Now repeat the experiment in each branch, so we get four branches, with outcomes '00', '01', '10', and '11', respectively. Repeat this N times and you find 2^N branches, covering all possible binary sequences. Note that this result does not depend on the coefficients 'a' and/or 'b' in the above wave function. So you get the same 2^N branches whatever the coefficients.

But the Born rule says that the probability that you observe any particular sequence depends on the squared magnitudes of the coefficients, and the number of each coefficient depends on the numbers of '0's and '1's in the branch you happen to be on. Since, in the multiverse framework, the branch you happen to be on is random (determined by some self-location probability -- uniform over all branches in this case), it is very unlikely that the relative numbers of '0's and '1's in your branch happen to agree with the Born probabilities: in fact the probability that you will see the Born frequencies vanishes like 1/2^N as N becomes large. The fact that experiments in quantum mechanics universally obtain results that agree with the Born probabilities is, therefore, inexplicable in the many-worlds model.
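
The counting argument can be checked with a small enumeration (a sketch with hypothetical numbers, taking |a|^2 = 0.9 and N = 20): counted branch-by-branch, almost no branches show Born-rule frequencies, while the Born weights concentrate almost all their measure on exactly those branches.

    from math import comb

    N, p, tol = 20, 0.9, 0.1   # trials, Born weight of '0', frequency tolerance

    # Branches whose fraction of '0's lies within tol of the Born probability:
    born_like = [k for k in range(N + 1) if abs(k / N - p) <= tol]

    # Fraction of the 2^N branches, counted equally (uniform self-location):
    count_frac = sum(comb(N, k) for k in born_like) / 2**N

    # Born-rule measure carried by those same branches:
    born_measure = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in born_like)

    print(count_frac, born_measure)   # ~0.006 vs ~0.96; the gap widens with N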

Bruce

Quentin Anciaux

Feb 5, 2025, 5:57:05 PM
to everyth...@googlegroups.com
Bruce,

You’re trying to reduce the issue to my supposed "difficulty" with randomness, but that’s not the point. The problem isn’t whether quantum events are random—it’s whether probability has a meaningful foundation in a single-history universe where only one sequence of events is ever realized.

You keep appealing to repeated trials, but even with infinite repetitions, some events with nonzero probability will never occur in the one and only history that unfolds. That’s not a minor detail—that’s a fundamental contradiction in the way probability is treated in a single-world framework. If an event assigned a 30% probability never happens, then its "probability" was meaningless in any real sense. It was never a real possibility, just a number in an equation.

Now, regarding the Born rule: You claim that MWI contradicts it, but your argument assumes that every possible branch must exist in equal measure, which is not what MWI predicts. The structure of the wavefunction naturally leads to branches that reflect the Born probabilities because those branches are weighted according to the squared amplitudes. It’s not about equal-counted branching—it’s about the distribution of measure across branches, which naturally results in Born-rule outcomes.

Your argument also ignores the fact that in a single-history universe, the Born rule is just an imposed rule with no deeper explanation. Why do probabilities follow this rule in a framework where only one history exists? What forces the realized history to match the expected distribution? If probabilities are just random assignments with no deeper foundation, then their success in predicting experimental results is equally mysterious in a single-history view.

MWI provides an actual mechanism for why the Born rule emerges: it follows from the structure of the wavefunction itself. Your argument, on the other hand, assumes the Born rule as a brute fact without explaining why a single realized history should respect it in the first place. That’s not an explanation—it’s just asserting that the math works and ignoring the deeper implications.




Alan Grayson

Feb 5, 2025, 6:28:08 PM
to Everything List
On Wednesday, February 5, 2025 at 3:36:11 PM UTC-7 Quentin Anciaux wrote:


Le mer. 5 févr. 2025, 22:25, Alan Grayson <agrays...@gmail.com> a écrit :


On Wednesday, February 5, 2025 at 1:36:55 PM UTC-7 Quentin Anciaux wrote:
Brent,

I went through the document you sent, and it outlines the different interpretations of probability: mathematical, physical symmetry, degree of belief, and empirical frequency. But none of these resolve the core issue in a single-history universe—where probability is supposed to describe "possibilities" that, in the end, never had any reality.

Your frequentist approach assumes that, given enough trials, outcomes will appear in proportions that match their theoretical probabilities. But in a finite, single-history universe, there is no guarantee that will ever happen. Some events with nonzero probability simply won’t occur—not because of statistical fluctuations, but because history only plays out one way. In that case, were those possibilities ever really possible? If something assigned a probability of 10% never happens in the actual course of the universe, then in what meaningful way was it ever a possibility?

You argue that if all possibilities are realized, probability loses its meaning. But in a single-history world, probability is just as meaningless because it describes outcomes that never had a chance of being real. If probability is supposed to quantify potential realities, then in a framework where only one reality exists, probability is nothing more than a retrospective justification—it has no actual explanatory power.

The math remains internally consistent, but it becomes an empty formalism, detached from anything real. The whole structure relies on pretending that unrealized events still "exist" in some abstract sense, even though they never affect reality. That’s the contradiction at the heart of the single-history view. It uses probability to describe possibilities while simultaneously denying that those possibilities ever had a chance to be real.

Why do you assume that some non-zero probabilities never occur? How could you know this? Meanwhile, you prefer a theory, MWI, that can't be verified. Puzzling preferences. AG

AG,

I assume that some nonzero probabilities never occur because, in a single-history universe with finite time and a unique trajectory, there is no guarantee that every possible outcome will ever be realized. If history unfolds in only one way, then there will inevitably be events assigned nonzero probability that simply never happen. That’s not an assumption—it’s an unavoidable consequence of having only one realized history.

That's an assumption. What isn't an assumption is that the worlds of the MWI can never be contacted. This is your preference. AG 

Bruce Kellett

Feb 5, 2025, 6:46:43 PM
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 9:57 AM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

You’re trying to reduce the issue to my supposed "difficulty" with randomness, but that’s not the point. The problem isn’t whether quantum events are random—it’s whether probability has a meaningful foundation in a single-history universe where only one sequence of events is ever realized.

You keep appealing to repeated trials, but even with infinite repetitions, some events with nonzero probability will never occur in the one and only history that unfolds. That’s not a minor detail—that’s a fundamental contradiction in the way probability is treated in a single-world framework. If an event assigned a 30% probability never happens, then its "probability" was meaningless in any real sense. It was never a real possibility, just a number in an equation.

That is not a good argument. If something of supposed probability 30% does not occur in, say, 100 trials, then your prior estimate of the probability was wrong. Probability theory tells you how many occurrences of low-probability events you can expect in a particular number of trials; if those expectations are not fulfilled, then your prior probability estimates are wrong. So it is wrong to say that low-probability events will never occur, no matter how many trials you run. Probability theory tells you what you can expect, and when low-probability events can be expected to occur (or not occur) in some sequence of trials. Many worlds theory can do no better than this, because it says that you will never see those low-probability branches, even if they exist. I don't see that this gets you any further ahead.
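For what it's worth, the arithmetic behind this is easy to check (a toy binomial calculation using the 30%/100-trial numbers above):

    # If p = 0.3 were right, how surprising is zero occurrences in 100 trials?
    import math
    p, N = 0.3, 100
    expected = N * p                    # 30 expected occurrences
    sigma = math.sqrt(N * p * (1 - p))  # ~4.6, the binomial standard deviation
    p_zero = (1 - p) ** N               # ~3.2e-16
    print(expected, sigma, p_zero)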

Now, regarding the Born rule: You claim that MWI contradicts it, but your argument assumes that every possible branch must exist in equal measure, which is not what MWI predicts. The structure of the wavefunction naturally leads to branches that reflect the Born probabilities because those branches are weighted according to the squared amplitudes.

That is not true. If your theory, following Everett, is that the Schrodinger equation is all there is, then it is a fact that the Schrodinger equation is insensitive to the coefficients, so branches do not get weighted in the way you assume. The claim that branches are 'weighted' by the coefficients is an additional assumption -- equivalent to the assumption of the Born rule.

It’s not about equal-counted branching—it’s about the distribution of measure across branches, which naturally results in Born-rule outcomes.

That is something that has to be proved, and despite many efforts, it still remains an unproven assumption.

Your argument also ignores the fact that in a single-history universe, the Born rule is just an imposed rule with no deeper explanation. Why do probabilities follow this rule in a framework where only one history exists? What forces the realized history to match the expected distribution?

Nothing 'forces' the realized history to match the Born rule expectations. The Born rule is an observed fact, and it is an assumption of the theory --  a brute fact about probabilities if you like. There is no deeper explanation for random occurrences.

If probabilities are just random assignments with no deeper foundation, then their success in predicting experimental results is equally mysterious in a single-history view.

MWI provides an actual mechanism for why the Born rule emerges: it follows from the structure of the wavefunction itself.

That is simply not true. You might like it to be the case, but it has never been shown to be true. If it is true, you can give the proof here --  physics would be delighted.....

Your argument, on the other hand, assumes the Born rule as a brute fact without explaining why a single realized history should respect it in the first place. That’s not an explanation—it’s just asserting that the math works and ignoring the deeper implications.

That is the way it is. The Born rule is just a brute fact, and since it is a probability theory, there is no deeper 'mechanical' explanation.

Bruce

Quentin Anciaux

unread,
Feb 5, 2025, 7:05:54 PMFeb 5
to everyth...@googlegroups.com
Bruce,

Let’s take your own argument about probability and push it to its logical conclusion. You said that if something with a 30% probability doesn’t happen in a given set of trials, that just means the prior probability estimate was wrong. Fine. Now, let’s apply that logic to a real-world scenario.

Imagine an asteroid is heading toward Earth, and based on all available data, models predict it has an 80% probability of impact. Yet, somehow, it doesn’t hit. By your reasoning, this means that the 80% estimate must have been wrong—because in the single-history universe, only what actually happens matters. The probability was just a number assigned to something that never had any reality.

But this raises an obvious problem: what is probability even describing in a single-history framework? If probabilities are supposed to quantify real possibilities, yet some of them never happen despite high probability assignments, then those probabilities were meaningless from the start. The asteroid example makes it clear—if a highly probable event doesn’t occur, it wasn’t a real possibility in any meaningful sense. It was just a mathematical expectation that reality never fulfilled.

In a multiverse framework, this isn’t an issue because the probabilities describe actual distributions of events across different branches. There exist branches where the asteroid hits and others where it doesn’t, and the 80% probability corresponds to the fraction of branches where impact occurs. But in a single-history framework, that 80% was just an empty number—nothing ever "happened" with 80% likelihood because only one outcome was ever real.

Your argument boils down to saying, "Probability theory tells us what we should expect, but if reality doesn’t match, then the prior probability was wrong." But this means probability has no independent explanatory power—it is just a bookkeeping trick that retroactively adjusts itself to match what already happened. That’s not an actual explanation of events; it’s just a way of pretending probability still means something when it clearly doesn’t in a single-history world.

So tell me: in a single-history universe, if the asteroid doesn’t hit despite an 80% probability, was it ever actually an 80% chance event? Or was that probability just an illusion, describing something that was never going to happen in the only history that exists?

Quentin 


Bruce Kellett

unread,
Feb 5, 2025, 7:39:05 PMFeb 5
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 11:05 AM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

Let’s take your own argument about probability and push it to its logical conclusion. You said that if something with a 30% probability doesn’t happen in a given set of trials, that just means the prior probability estimate was wrong. Fine. Now, let’s apply that logic to a real-world scenario.

Imagine an asteroid is heading toward Earth, and based on all available data, models predict it has an 80% probability of impact. Yet, somehow, it doesn’t hit. By your reasoning, this means that the 80% estimate must have been wrong—because in the single-history universe, only what actually happens matters. The probability was just a number assigned to something that never had any reality.

You have changed the nature of the problem. In the first instance it was that an event of probability 30% would never happen in a series of trials. I countered by saying that if it didn't happen in, say, 100 trials, then the initial probability estimate was wrong.

The asteroid case is different in that there is only ever one trial. If the chance of hitting is calculated to be 80% and it doesn't hit, that merely means that the 20% chance of missing was realized. It does not imply that the initial estimates were wrong -- it merely implies that the Earth was lucky. You have confused single-event probabilities with repeated-trial probability.
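One way to make the distinction vivid (a toy simulation; treating many separate one-off 80% predictions as independent is an illustrative assumption): a single miss is fully consistent with a correct 80% figure, since across many such predictions about one in five should miss.

    # Many separate one-off events, each predicted at 80%: if the estimates
    # are well calibrated, roughly 20% of them fail to happen.
    import random
    random.seed(0)  # illustrative seed for reproducibility
    hits = [random.random() < 0.8 for _ in range(10_000)]
    print(sum(hits) / len(hits))  # ~0.8; any single miss falsifies nothing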

But this raises an obvious problem: what is probability even describing in a single-history framework? If probabilities are supposed to quantify real possibilities, yet some of them never happen despite high probability assignments, then those probabilities were meaningless from the start. The asteroid example makes it clear—if a highly probable event doesn’t occur, it wasn’t a real possibility in any meaningful sense. It was just a mathematical expectation that reality never fulfilled.

Single event probabilities are not the same as probabilities in a series of trials. You are just confusing things.

In a multiverse framework, this isn’t an issue because the probabilities describe actual distributions of events across different branches. There exist branches where the asteroid hits and others where it doesn’t, and the 80% probability corresponds to the fraction of branches where impact occurs. But in a single-history framework, that 80% was just an empty number—nothing ever "happened" with 80% likelihood because only one outcome was ever real.

Your argument boils down to saying, "Probability theory tells us what we should expect, but if reality doesn’t match, then the prior probability was wrong." But this means probability has no independent explanatory power—it is just a bookkeeping trick that retroactively adjusts itself to match what already happened. That’s not an actual explanation of events; it’s just a way of pretending probability still means something when it clearly doesn’t in a single-history world.

In fact, what we assume is the Born rule which says that the probability of a particular outcome is just the absolute square of the corresponding coefficient in the wave function.

Many worlds theory does not have any comparable way of relating probabilities to the properties of the wave function. In fact, if all possibilities are realized on every trial, the majority of observers will get results that contradict the Born probabilities.

So tell me: in a single-history universe, if the asteroid doesn’t hit despite an 80% probability, was it ever actually an 80% chance event? Or was that probability just an illusion, describing something that was never going to happen in the only history that exists?

Yes, assuming that the calculations were accurate, there certainly was an 80% chance of hitting, and a 20% chance of missing. It happened that the 20% chance was realized in this one-off trial. Which is not to say that that would be the outcome in repeated trials of the same event. Unfortunately, repeats of unique events are seldom possible.

Bruce

Brent Meeker

unread,
Feb 5, 2025, 7:46:57 PMFeb 5
to everyth...@googlegroups.com



On 2/5/2025 12:53 AM, Quentin Anciaux wrote:
Bruce,

Repeated experiments don’t change the core issue. Even if you perform an experiment a trillion times, in a single-history universe, there is still only one realized sequence of outcomes. That means certain possibilities with greater than zero probability will simply never happen—not just in a given run, but ever.
So you want to count distributions over inaccessible worlds whose only claim on existence is that they keep all the solutions to Schrödinger's equation.  But you don't want to count distributions over time because you can think of them as a sequence...even though the probabilities applied to single events.

Brent

Quentin Anciaux

unread,
Feb 5, 2025, 7:49:05 PMFeb 5
to everyth...@googlegroups.com
Bruce,

You’re making a distinction between single-event probabilities and repeated trials, but you’re not addressing the core issue: in a single-history universe, probability is only ever descriptive, not explanatory. You claim that if an asteroid has an 80% chance of impact but doesn’t hit, then the 20% chance was simply realized. But this explanation is entirely retrospective—it tells us nothing about why this history, rather than any other, is the one that unfolded.

You say, "assuming the calculations were accurate, then there certainly was an 80% chance of hitting, and a 20% chance of missing." But in what sense was that 80% ever real? In the only history that exists, the asteroid never had an 80% chance of hitting—it always had a 100% chance of missing because that’s what happened. The probability was just a number assigned before the event, with no actual force in determining the outcome.

In a multiverse framework, probabilities are grounded in actual distributions across histories. The 80% means that in 80% of branches, the asteroid hits, and in 20%, it misses. This gives probability an explanatory role—it describes the structure of reality, not just an arbitrary number assigned to something that never had a chance of happening.

You claim that MWI has no way to connect probabilities to the wavefunction, but that’s false. The structure of the wavefunction naturally assigns measure to branches, and those measures correspond to the squared amplitudes of the coefficients—the Born rule emerges from this structure. You keep asserting that probabilities in MWI are meaningless because "all possibilities happen," but that’s only true if you ignore the fact that measure matters. Not all branches are weighted equally, and the frequencies of outcomes reflect those weights.
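For concreteness, here is a minimal numerical sketch of the weights being claimed, for a two-outcome state a|0> + b|1> (the amplitudes are illustrative, and the sketch states the Born weights rather than deriving them):

    import numpy as np

    # Two-branch state a|0> + b|1>, with illustrative amplitudes.
    a = np.sqrt(0.8)          # |a|^2 = 0.8
    b = 1j * np.sqrt(0.2)     # |b|^2 = 0.2 (the phase doesn't change the measure)
    weights = np.abs(np.array([a, b])) ** 2
    print(weights)            # [0.8, 0.2] -- the branch measures
    print(weights.sum())      # 1.0, by normalization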

The issue isn’t whether we can calculate probabilities in a single-history world—it’s whether those probabilities have any real ontological meaning. You claim that in a single-history world, probabilities "just work," but that’s not an explanation. It’s just a way of pretending that numbers assigned before an event have some deeper reality when, in truth, they don’t. In the end, the only thing that exists is the one history that happens, and everything else was just an illusion of possibility.

Quentin 


Quentin Anciaux

unread,
Feb 5, 2025, 7:57:30 PMFeb 5
to everyth...@googlegroups.com
Brent,

The difference is fundamental: distributions over time in a single-history universe are not the same as distributions over actualized possibilities. In a single-history world, time only ever produces one sequence of events. If an outcome with a supposed 10% probability never occurs, then in what sense was it ever truly a possibility? It wasn’t—it was just a number assigned to something that never happened and never was going to happen in this one realized history.

In contrast, in a framework where all possibilities are realized, probability retains its full meaning because it describes the structure of the wavefunction across all branches. The distribution is not just an abstract mathematical expectation—it corresponds to actual occurrences. The probabilities aren’t just numbers assigned before an event; they describe real proportions of outcomes across reality.

You claim I “don’t want to count distributions over time,” but that’s not the issue. The issue is that in a single-history universe, probability is always a retrospective descriptor with no causal power. You act as if probability applies to single events in isolation, but that only works if you assume that unrealized possibilities somehow “mattered” despite never actually happening. That’s the contradiction: single-history probability calculations reference things that were never part of reality, yet claim to describe reality.

If you want to argue that probability is meaningful in a single-history framework, then explain this: if an outcome has a calculated probability of 10% but never occurs in the one and only history that unfolds, was it ever a real possibility? If yes, then where is that possibility? If no, then what was probability actually describing?

Quentin 


Bruce Kellett

unread,
Feb 5, 2025, 8:09:17 PMFeb 5
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 11:49 AM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

You’re making a distinction between single-event probabilities and repeated trials, but you’re not addressing the core issue: in a single-history universe, probability is only ever descriptive, not explanatory. You claim that if an asteroid has an 80% chance of impact but doesn’t hit, then the 20% chance was simply realized. But this explanation is entirely retrospective—it tells us nothing about why this history, rather than any other, is the one that unfolded.

That is the nature of random events. It seems that your real objection is to randomness, events that have no simple mechanical explanation. That is quantum mechanics, and you just have to get used to it.

You say, "assuming the calculations were accurate, then there certainly was an 80% chance of hitting, and a 20% chance of missing." But in what sense was that 80% ever real? In the only history that exists, the asteroid never had an 80% chance of hitting—it always had a 100% chance of missing because that’s what happened. The probability was just a number assigned before the event, with no actual force in determining the outcome.

I think the calculations are based on prior experience. They are real enough, not just empty air.

In a multiverse framework, probabilities are grounded in actual distributions across histories. The 80% means that in 80% of branches, the asteroid hits, and in 20%, it misses. This gives probability an explanatory role—it describes the structure of reality, not just an arbitrary number assigned to something that never had a chance of happening.

Unfortunately, it is well known that branch counting is a failed enterprise in quantum mechanics. So the claim that something or the other is true in 80% of the branches is just empty rhetoric.

You claim that MWI has no way to connect probabilities to the wavefunction, but that’s false. The structure of the wavefunction naturally assigns measure to branches,

Does it now? And how does it do that? You have a well-developed ability to make endless unevidenced assumptions and bend them to your will. Start trying to prove some of this!


and those measures correspond to the squared amplitudes of the coefficients—the Born rule emerges from this structure.

It does not without many additional assumptions. The attempts to derive the Born rule from Everett have all failed.

You keep asserting that probabilities in MWI are meaningless because "all possibilities happen," but that’s only true if you ignore the fact that measure matters. Not all branches are weighted equally, and the frequencies of outcomes reflect those weights.

But you have not shown how these weights arise, or how outcomes depend on these 'weights'.

The issue isn’t whether we can calculate probabilities in a single-history world—it’s whether those probabilities have any real ontological meaning. You claim that in a single-history world, probabilities "just work," but that’s not an explanation. It’s just a way of pretending that numbers assigned before an event have some deeper reality when, in truth, they don’t. In the end, the only thing that exists is the one history that happens, and everything else was just an illusion of possibility.

You are still looking for an 'explanation' of random events. You will look in vain, because no such explanation can be forthcoming.

Bruce

Brent Meeker

unread,
Feb 5, 2025, 8:21:59 PMFeb 5
to everyth...@googlegroups.com



On 2/5/2025 5:27 AM, John Clark wrote:
On Tue, Feb 4, 2025 at 6:22 PM Brent Meeker <meeke...@gmail.com> wrote:

If all possibilities were realized they wouldn't have probabilities assigned to them...exactly the problem that arises in MWI.

You've forgotten that it's not just an electron that is a quantum object and thus part of the Universal Wave Function (UWF),  you are also part of the UWF. There are an astronomical number of branches of the UWF, perhaps an infinite number, and those branches do not interact with each other and thus can be interpreted as separate "worlds".
So why do you postulate an infinite number of worlds?  Most MWI advocates relate the number to measurement outcomes, of which there are only a few.

You the observer are stuck in just one of those branches and thus lack sufficient information to know if you are in the branch where the cat is alive or the branch where the cat is dead, you need to open the box and look in to get that information, before that you do what you always do when you don't have enough information to be certain, you work with probabilities.
Are you claiming that there are no inherently probabilistic events, e.g. nuclear decay, and it's just a matter of ignorance?


  

The quantum bomb tester demonstrates it is possible to obtain information about an object without interacting with it in any way, the bomb does explode in some branches (a.k.a. worlds) of the UWF but if you set things up properly you the observer will be in a branch where the bomb did NOT explode and yet you know for certain the bomb is working properly and will explode if it detects even one photon. Many Worlds can easily explain how interaction free measurement could work, and do so without invoking some sort of ill defined wave function collapse, by simply acknowledging that all outcomes occur each in its own independent branch of the UWF. But the competitors of Many Worlds struggle to give an intuitive explanation of how interaction free measurement could possibly work. And this is important!
Single world theories also easily explain how "interaction free" measurements work.  They work probabilistically.


Years ago in high school physics I was taught a derivation of Heisenberg's Uncertainty Principle that started from the assumption that you'd have to use photons to detect something and that would always disturb what you're looking at, but I now know that derivation was invalid; it got the right answer but for the wrong reason. The real reason is due to the mathematical structure of quantum mechanics, the uncertainty principle is derived from the non-commuting nature of observable operators, like position and momentum, or energy and time.
So what?  I've known that since sophomore physics.


I've also heard the “using photons to detect something disturbs it” argument to explain why Maxwell's Demon does not violate the Second Law Of Thermodynamics,
I think that was an argument for the HUP, not Landauer's principle.  But so what?  Are you just reminiscing?

Brent

Brent Meeker

unread,
Feb 5, 2025, 8:30:53 PMFeb 5
to everyth...@googlegroups.com



On 2/5/2025 5:53 AM, John Clark wrote:
On Wed, Feb 5, 2025 at 5:48 AM Bruce Kellett <bhkel...@gmail.com> wrote:
 
The sequence of N  repetitions gives you an estimate of the probability distribution.

OK, but even if N=1, such as when I flip a coin just once, I can still obtain a probability of it coming up heads that makes sense.
But Quentin is asserting that if it comes up heads then the probability of heads was 1 and the probability of tails was 0.  Thus completely misapplying the concept of probability.


And the long-run relative frequency of an event occurring in repeated trials is just one way to define probability, you could also say it's the ratio of all possible favorable outcomes to all possible outcomes, or as a degree of belief based on available evidence that can be updated when new evidence becomes available.
That's right.  It's a common confusion that probability is a single thing which we must correctly grasp.  But probability is more like energy or value.  It means different things that only have in common that they obey the mathematical rules of probability.  And that fact is the strength of the theory.  Relative frequency in one context can be interpreted as rational degree of belief in another.  It can be inferred from symmetries and applied to bets.

Brent

Brent Meeker

unread,
Feb 5, 2025, 9:43:37 PMFeb 5
to everyth...@googlegroups.com



On 2/5/2025 12:10 PM, Quentin Anciaux wrote:
Brent,

You're arguing that probabilities in a single-world framework are as real as those of observed events because they are derived from the same equations. But if only one history ever happens, then unrealized possibilities are just numbers in a calculation, not something that ever had a chance of being real.
But that's exactly wrong.  They are in the calculation precisely because they did have a chance of being real.  You keep leaning on "only one history happens".  But the probabilities are for the individual events.  The probability of a die landing on any given face is 1/6.


The theory predicts probabilities, but what actually occurs is just one unique sequence of events. The rest—no matter how formally predicted—never existed in any form beyond the equations.
It existed as a possibility.  Your theory implies that every event is deterministic, which implies a simple close-minded rejection of the concept of probability.


You claim that this does not imply determinism, but the fact remains that only one history ever unfolds.
So what?  Would it help if two histories unfolded?  If so, just divide your one history in half.


Whether the process is called "random" or not, in practical terms, there is no actual underlying ensemble of events
First, a sequence in time is just as much an ensemble as a set in space: whether you throw a die ten times or you throw ten dice at once.  Second, it's your misunderstanding that probability can only apply to ensembles.  I assume you've flown on an airliner.  Did you consider the possibility of it crashing?  If so then you must have considered the probability of that occurrence, even though you could not take that flight more than once.


—there is just the one sequence that reality plays out. That makes probability, in this framework, purely descriptive of an imagined set of possibilities that never had any ontological status.
But not just "imagined".  They are imagined as consistent with physical theory and their probability can be directly calculated in some cases and in others is estimated from statistics.  You have an impoverished view of probability, imagining it only applies to frequency within an ensemble.  But it also applies to degree of rational belief and quantum mechanical events.

You say that unrealized events are "part of reality probabilistically," but what does that even mean when they never actually happen? If an event is assigned a 60% probability but never occurs in the only history that exists, then in what sense was it ever a real possibility?
Suppose it does occur; then in what sense was its non-occurrence a possibility?  You've adopted an impoverished view in which there is no such thing as probability and you can never flip a coin with probability 0.5 that it will come up heads.


It was just an abstract calculation with no actual link to reality.
You keep writing that, which is what I point to as just emotive argument.  1) All calculation is abstract; that's what makes it universally applicable.  2+2=4 no matter what we're counting.  2) That's simply false.  Calculated probabilities are linked to reality in many different ways.  Some, like die rolls and coin flips, are based on physical symmetry.  Others, like quantum events, are based on the height of energy barriers.  Some are inferred from statistics.  They are all linked to reality...unlike many-worlds, whose only link is an inability to conceive of the Born Rule outside of a frequentist interpretation.


You keep referring to long strings of experiments as if an infinite series of trials is guaranteed to sample all possibilities—but in a finite universe with a unique history, that is simply not true.
Well maybe it was before you were born, but philosophers of mathematics used to argue that probabilities only referred to infinite sequences of events...in much the same way you want to refer to infinite ensembles.



Your attempt to equate probability with other concepts like energy or entropy fails because those are directly observable and quantifiable properties of physical systems.
They are no more directly observable than probability.  Have you ever seen an entropy meter?  How does it work?  How would you measure the energy in a glass of water?  Probability theories are tested exactly as you would test any physical theory.  If the Stern-Gerlach calculation says half the silver atoms will go up, you run through enough silver atoms to test it.  You don't say, "Oh, I can't test it because every sequence of UP and DOWN is unique."
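Concretely, that test is just a binomial check (a toy simulation with an invented seed, standing in for real Stern-Gerlach counts):

    # Binomial check of "half the silver atoms go up" on N simulated atoms.
    import math, random
    random.seed(1)                      # invented seed; stands in for real data
    N = 100_000
    ups = sum(random.random() < 0.5 for _ in range(N))
    freq = ups / N
    sigma = math.sqrt(0.5 * 0.5 / N)    # standard error of the observed frequency
    print(freq, sigma)                  # freq should land within a few sigma of 0.5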


Probability, in a single-history framework, is not a property of the world—it’s a mental construct we impose on it. It’s not like energy or momentum; it’s a way of reasoning about things that will never actually exist.
I want to play cards with you.  You must be terrible at poker.



MWI, on the other hand, does not set all probabilities to 1 arbitrarily.
Now you're changing your story.  Before, whatever happened had probability 1; nothing else could have.


It gives probability a real foundation by making it about relative frequencies across real histories.
What's "real" about the histories.  They are just imagined and their number and frequency is just inferred from Born's (abstract) probability calculation


There, probabilities describe distributions of actualized outcomes,
"Actualized" that are never actual.  You have a way with words.


not abstract unrealized ones. In contrast, in a single-world view, probability is just a way of pretending that things that never existed somehow mattered. That is the contradiction you keep glossing over.
Because they could have existed, just like your "actualized" but not actual outcomes.  That's what probability means; it means something could be, but need not be.  You are trying to banish the concept of probability by "actualizing" everything; but this fails because then there is no meaning to the Born Rule.

Brent

Brent Meeker

unread,
Feb 6, 2025, 12:01:39 AMFeb 6
to everyth...@googlegroups.com



On 2/5/2025 12:36 PM, Quentin Anciaux wrote:
Brent,

I went through the document you sent, and it outlines the different interpretations of probability: mathematical, physical symmetry, degree of belief, and empirical frequency. But none of these resolve the core issue in a single-history universe—where probability is supposed to describe "possibilities" that, in the end, never had any reality.
"in the end" implies post-hoc judgement.  When you calculate and apply probabilities you don't know which events will be realized.  That's why they are probabilities.


Your frequentist approach assumes that, given enough trials, outcomes will appear in proportions that match their theoretical probabilities.
Which is why some philosophers of mathematics tried to define probabilities as long-run (-> infinity) frequencies.


But in a finite, single-history universe, there is no guarantee that will ever happen.
And there's no guarantee that some possibility you've overlooked won't occur.  Forget histories.  Suppose your friend has drawn a card, the 6 of Spades, and now you're going to draw a card and high card wins.  What odds are you willing to give him?

Some events with nonzero probability simply won’t occur—not because of statistical fluctuations, but because history only plays out one way. In that case, were those possibilities ever really possible? If something assigned a probability of 10% never happens in the actual course of the universe, then in what meaningful way was it ever a possibility?
It's an application of a theory.  Of course it can be mis-applied.  You might leave out a possibility that actually happens.


You argue that if all possibilities are realized, probability loses its meaning. But in a single-history world, probability is just as meaningless because it describes outcomes that never had a chance of being real.
How is that different from describing outcomes that occur where nobody can check that they happened, outcomes that are, in your words, just abstractions?  And they did have a chance of being real, which you would realize if you knew what "a chance" means.


If probability is supposed to quantify potential realities, then in a framework where only one reality exists, probability is nothing more than a retrospective justification—it has no actual explanatory power.

The math remains internally consistent, but it becomes an empty formalism, detached from anything real.
Don't take any money to a poker game.


The whole structure relies on pretending that unrealized events still "exist" in some abstract sense,
Which is better than pretending that whole unobservable, inaccessible worlds really, really exist for real...they just don't make any difference to anything.

Brent

Quentin Anciaux

unread,
Feb 6, 2025, 2:45:02 AMFeb 6
to everyth...@googlegroups.com
Bruce,

You keep insisting that randomness "just is" and that no deeper explanation is possible, but that’s precisely the problem with the single-history view: it reduces probability to a descriptive afterthought with no fundamental meaning. You argue that in a single-history universe, we must simply accept that an event had an 80% chance of happening even if it never does. But in what sense was that probability real if only one history ever unfolds and the event never occurs?

You say that probabilities are "real enough" because they are based on prior experience. But prior experience in a single-history universe is just another way of saying, "this is what happened before." That’s not an explanation; it’s circular reasoning. You’re using past outcomes to justify probability assignments, but if probability is supposed to describe potential events, then what does it mean when a "possible" event never happens, despite being assigned a nonzero probability?

In a multiverse framework, probability describes the relative frequency of events across actualized branches. It is not just an abstract expectation—it is grounded in the structure of the wavefunction itself. You dismiss the idea that measure in the wavefunction corresponds to probability, but this is not an assumption—it follows naturally from the mathematics of quantum mechanics. The Born rule is not an extra assumption in MWI; it emerges from decision theory (Deutsch-Wallace), from symmetry arguments (Zurek’s envariance), or from self-locating uncertainty. You keep demanding "proof," yet you accept the Born rule as a brute fact in your own framework without any justification beyond "that’s just how quantum mechanics works."

Meanwhile, your single-history approach provides no mechanism for why probabilities match experimental results. You claim that probability "just works" without explaining why the realized history should respect the Born distribution at all. You dismiss branch weighting in MWI as unproven, yet you offer no competing explanation for why a single sequence of events should follow probabilistic predictions.

Ultimately, your position amounts to: "Random events happen, probabilities just work, and there’s no deeper reason for anything." That’s not an explanation—it’s an assertion that we shouldn’t ask questions. If you’re satisfied with that, fine, but let’s not pretend it’s a superior foundation for probability. It’s just giving up on understanding why reality follows probabilistic laws in the first place.

Quentin


Quentin Anciaux

unread,
Feb 6, 2025, 2:47:59 AMFeb 6
to everyth...@googlegroups.com
Brent,

Your response is full of rhetorical flourishes, but it still doesn’t address the fundamental issue: in a single-history universe, probability describes things that never had any reality and never could have. You claim that probabilities "could have existed," but in a single history, that’s false—only one sequence ever occurs, and the others were never anything more than abstract labels assigned before the fact.

You compare probability to entropy and energy, but that analogy fails because entropy and energy can be measured within a single history. They are properties of physical systems that directly affect outcomes. Probability, in contrast, is supposed to describe potentiality—but in a single-history world, there is no real potentiality, only the one realized sequence. That means probabilities are just an abstract exercise in imagination rather than something that refers to anything in reality.

You mock MWI by calling its histories "imagined," yet you rely on equally imagined possibilities in a single-history world to justify probability. The difference is that in MWI, probabilities describe real distributions of outcomes across actualized branches, whereas in a single-history world, probability is just a way to pretend that things that never happened somehow mattered.

You keep bringing up poker as if probability in a single-history world is meaningful in the same way. But in a single-history framework, every game ever played follows only one sequence of outcomes, and any probability assigned to a hand was just a mental construct—it never had any effect on what actually happened. If you replayed history, there’s no guarantee any given probability assignment would be borne out, because only one history ever occurs.

You say I’m "changing my story" on probabilities in MWI, but that’s just another misrepresentation. MWI does not assign probability 1 to everything—it assigns measure to branches, which correspond to relative frequencies of outcomes. That’s not an arbitrary assumption—it follows from the structure of the wavefunction itself. You dismiss that without engaging with the actual derivations (Deutsch, Wallace, Zurek), yet you accept the Born rule as a brute fact in a single-history world.

Ultimately, your position amounts to: probability "just works," even though it describes things that never happened and never will. You claim I "banish probability" by actualizing everything, but in reality, it’s the single-history view that renders probability meaningless. It turns probability into a convenient fiction rather than something that reflects the structure of reality. That’s the contradiction you keep dodging.



Quentin Anciaux

unread,
Feb 6, 2025, 2:51:18 AMFeb 6
to everyth...@googlegroups.com
Brent,

You're leaning heavily on the idea that probability is meaningful simply because it's applied before knowing the outcome. But that doesn't address the real problem: in a single-history world, probability isn’t describing actual potentialities—it’s describing imagined ones that were never part of reality. You claim that "they did have a chance of being real," but in a framework where only one history ever unfolds, that’s just wordplay. They never happened, and they never were going to happen, because the one sequence of reality had already been determined.

You mock the idea of unobservable branches in MWI, but at least those branches correspond to real structures in the wavefunction. In contrast, your framework uses probabilities that refer to nothing but hypothetical scenarios that never had any ontological status. You want probability to describe "a chance," but in a single-history universe, those chances were never real—they were just numbers assigned before the event, with no deeper meaning beyond prediction.

Your card game example doesn’t help you. You ask what odds I’d give if my opponent draws a 6 of Spades. Fine. But in a single-history world, only one game will ever be played, and only one sequence of cards will ever be drawn. No matter what probability I assign beforehand, if in this single history I never draw a card higher than a 6, then those probabilities were just empty formalism. They described something that never had a chance of occurring because it never did and never would.

You dismiss MWI’s branches as "unobservable," yet you rely on equally unobservable "chances" in a single-history world. The difference is that in MWI, probabilities describe distributions over real branches, while in a single-history world, probability is just a way of pretending that things that never happened somehow mattered.

In short, your position requires believing that probability describes possibilities that never existed and never will, yet somehow remain meaningful. That’s incoherent. If probability is supposed to describe reality, it needs something real to refer to—not just imagined alternatives that were never anything more than numbers in an equation.

Quentin 


PGC

unread,
Feb 6, 2025, 4:29:29 AMFeb 6
to Everything List
Quentin, you are perhaps too generous to believe in discussion here. Consider halting the defense of many histories and going on offense: what principle selects that singular outcome? What privileges it above all others, and by what means or mechanism? There is no coherent answer without invoking a god (a Brent, an Alan, Cosmin, Trump, Spud, Giulio, an "it just happens", etc.) who just proclaims it on some creation day, and it was good. Closet bible worshippers. Apologies, but the types of fallacious argument (ad verecundiam) they rely on are the same, even if not regarding this exact proposition. 

John Clark

unread,
Feb 6, 2025, 7:09:26 AMFeb 6
to everyth...@googlegroups.com
On Wed, Feb 5, 2025 at 1:42 PM Alan Grayson <agrays...@gmail.com> wrote:
 
 >>> there is no interaction between branches and there is no causal link.

>> True. And that is exactly why those independent branches of the Universal Wave Function can be thought of as independent worlds. 

And why the MWI is unverifiable and tantamount to a fantasy. AG

Unverifiable is not synonymous with fantasy. Many theories make unverifiable predictions, but that is not how they are judged; they are judged by the number of verifiable predictions that have been experimentally proven correct, and if even one verifiable prediction is proven wrong then the scientific method judges the theory to be wrong. By that criterion Schrodinger's equation has proven itself to be a huge success. So it would be foolish to place the unverifiable other worlds that a super successful equation predicts in the same category as a Harry Potter novel. 

Also, I am absolutely convinced, from the mathematics and from the fact that it's been experimentally verified, that the quantum bomb tester works, it really is possible to use a photon to detect the presence of an object without the photon interacting with the object or the object interacting with the photon; BUT without those other worlds it's impossible, at least for me, to have an intuitive understanding of WHY it works.  I could say the same thing about the Quantum Zeno Effect.

Regardless of the old cliché about a watched pot never boiling, the time it takes to boil a pot of water really doesn't change depending on whether you are watching it or not, but in the weird quantum world you really CAN delay the decay of a radioactive atom if you watch it closely enough, and Many Worlds has no problem whatsoever explaining how this "Quantum Zeno Effect" works.   

Suppose an atom has a half-life of one second and I'm watching it. After one second the universe splits and so do I. In one universe the atom decays and I observe that it has decayed; in the other universe the atom has not decayed and I observe that it has not decayed. 

In the universe where the atom didn't decay, after another second the universe splits again, and again in one universe the atom decays while in the other it does not, having now survived for 2 full seconds. So there will be a version of me that observes this atom, which has a one-second half-life, surviving for 3 seconds, and 4 seconds, and 5 years, and 6 centuries, and you name it. By utilizing a series of increasingly complex and difficult procedures it is possible for the lab (and you) to be in the universe that contains labs and versions of yourself that see the atom surviving for an arbitrarily long length of time. But the longer the time past its half-life, the more splits are involved and the more difficult the experiment becomes. Soon it becomes ridiculously impractical to go further, but it's never fundamentally impossible. 
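The bookkeeping behind this is just repeated halving (a toy calculation; the equal 50/50 split per second is the illustrative assumption here):

    # Probability that a 1-second-half-life atom survives n seconds: (1/2)^n.
    # Under the equal-split picture, this is also the fraction of branches in
    # which the watcher still sees an undecayed atom.
    for n in (1, 2, 3, 10, 60):
        print(n, "seconds:", 0.5 ** n)  # 0.5, 0.25, 0.125, ~9.8e-4, ~8.7e-19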

If forming a mental picture about what's going on in a physical process is not important to you then Shut Up And Calculate (a.k.a. Copenhagen, a.k.a. Bayes) is fine, but personally I'd like a little bit more.
 John K Clark    See what's on my new list at  Extropolis
zeq


John Clark

unread,
Feb 6, 2025, 7:31:23 AMFeb 6
to everyth...@googlegroups.com
On Wed, Feb 5, 2025 at 8:21 PM Brent Meeker <meeke...@gmail.com> wrote:

why do you postulate an infinite number of worlds? 

As I've said many times before, I don't demand that and neither does Many Worlds; there might be an infinite number, or there might only be an astronomical number raised to an astronomical power of them. Many Worlds is agnostic on that issue and so am I.  

Are you claiming that there are no inherently probabilistic events, e.g. nuclear decay, and it's just a matter of ignorance?

The short answer is yes. Schrodinger's equation is 100% deterministic, and the only assumption Many Worlds makes is that Schrodinger's equation is correct; therefore, if Many Worlds is right, the Multiverse must be 100% deterministic. 


Single world theories also easily explain how "interaction free" measurements work. 

They can tell you how to make a calculation and get the correct answer, but if you are able to form an intuitive mental picture of how interaction-free measurement works then you have my SKEPTICAL admiration; you're a better man than I am.  

  John K Clark    See what's on my new list at  Extropolis
msa
 


 

John Clark

unread,
Feb 6, 2025, 7:50:19 AMFeb 6
to everyth...@googlegroups.com
On Wed, Feb 5, 2025 at 8:30 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> The sequence of N  repetitions gives you an estimate of the probability distribution.

>> OK, but even if N=1, such as when I flip a coin just once, I can still obtain a probability of it coming up heads that makes sense.
But Quentin is asserting that if it comes up heads then the probability of heads was 1 and the probability of tails was 0.  Thus completely misapplying the concept of probability.

I don't think so. If, AFTER I hit a golf ball 200 yards and we all see that it landed on one particular blade of grass, you ask what the probability was that it landed on that one particular blade, the correct answer would be 100%, despite the fact that there are millions of other blades of grass that it could have landed on.  

John K Clark    See what's on my new list at  Extropolis
mbg


 

John Clark

unread,
Feb 6, 2025, 3:50:54 PMFeb 6
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 4:29 AM PGC <multipl...@gmail.com> wrote:

Consider halting the defense of many histories and go on offense: What principle selects that singular outcome? What privileges it above all others and by what means/mechanism?

Many Worlds can answer that very easily: there is no such mechanism, because there is not a singular outcome. It's the opponents of Many Worlds who cannot give a coherent answer to that very important question.

  John K Clark    See what's on my new list at  Extropolis 
oow


Alan Grayson

unread,
Feb 6, 2025, 4:30:18 PMFeb 6
to Everything List
On Thursday, February 6, 2025 at 5:31:23 AM UTC-7 John Clark wrote:
On Wed, Feb 5, 2025 at 8:21 PM Brent Meeker <meeke...@gmail.com> wrote:

why do you postulate an infinite number of worlds? 

As I've said many times before, I don't demand that and neither does Many Worlds, there might be an infinite number or there might only be an astronomical number to an astronomical power of them,  Many Worlds is agnostic on that issue and so am I.  

IMO, the MWI does assume an infinite number of worlds, though that infinity might be only countable. You can easily infer that from the countably many energy states of the H atom. AG 

John Clark

unread,
Feb 6, 2025, 5:07:50 PMFeb 6
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 4:30 PM Alan Grayson <agrays...@gmail.com> wrote:

>> As I've said many times before, I don't demand that and neither does Many Worlds, there might be an infinite number or there might only be an astronomical number to an astronomical power of them,  Many Worlds is agnostic on that issue and so am I.  

IMO, the MWI does assume an infinite number of worlds, though that infinity might be only countable. You can easily infer that from the countably many energy states of the H atom. AG 

Maybe, maybe not. The energy states of a hydrogen atom are determined by the electromagnetic force, so can it really have an infinite number of energy states? Nobody knows, because when a photon gains more energy its wavelength decreases; eventually its wavelength will be the Planck Length, about 10^20 times smaller than the diameter of a proton, and then you will have so much mass/energy concentrated into such a small space that a tiny black hole is formed. Or at least that's what our current ideas say; there is no experimental confirmation of it, so it would be wise to be a bit skeptical. And if the photon has more energy than that, your guess is as good as mine about what will happen.     
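The orders of magnitude are easy to check (a back-of-the-envelope sketch using standard constants and E = hc/wavelength):

    # Photon energy at a Planck-length wavelength: E = h*c / wavelength.
    h = 6.626e-34          # Planck constant, J*s
    c = 2.998e8            # speed of light, m/s
    l_planck = 1.616e-35   # Planck length, m
    d_proton = 1.7e-15     # rough proton diameter, m
    E = h * c / l_planck
    print(d_proton / l_planck)          # ~1e20: proton diameter / Planck length
    print(E, "J", E / 1.602e-19, "eV")  # ~1.2e10 J, ~7.7e28 eV, within an order
                                        # of magnitude of the Planck energy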

 John K Clark    See what's on my new list at  Extropolis
   q4x

Alan Grayson

unread,
Feb 6, 2025, 6:44:18 PMFeb 6
to Everything List
The Rydberg series for the H-atom is countably infinite. AG

John Clark

unread,
Feb 6, 2025, 8:49:08 PMFeb 6
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 6:44 PM Alan Grayson <agrays...@gmail.com> wrote:

The Rydberg series for the H-atom is countably infinite. AG

The mathematical series may be infinite, but that doesn't necessarily mean its physical counterpart is; in fact, when physical theories predict infinities that's usually a sign that they're breaking down. Classical electrodynamics worked fine until things got really, really small, where it predicted that orbiting electrons would radiate away their energy and atoms would collapse, and obviously that didn't happen; that's why quantum mechanics was invented. 
   John K Clark    See what's on my new list at  Extropolis

d6c 

Brent Meeker

unread,
Feb 6, 2025, 11:00:00 PMFeb 6
to everyth...@googlegroups.com



On 2/5/2025 11:44 PM, Quentin Anciaux wrote:
Bruce,

You keep insisting that randomness "just is" and that no deeper explanation is possible, but that’s precisely the problem with the single-history view: it reduces probability to a descriptive afterthought with no fundamental meaning. You argue that in a single-history universe, we must simply accept that an event had an 80% chance of happening even if it never does.
You're making up strawmen.  If it never does in many trials, then a simple Bayesian calculation tells you it's much more probable that your 80% value contains an error than not.  You've obviously never dealt with the actual application of probabilities.
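Spelled out, that Bayesian calculation might look like this (a minimal sketch; the prior and the "wrong model" alternative are invented for illustration):

    # Posterior odds that "80%" was an error, after n predicted events all miss.
    # Invented numbers: 1% prior that the model is badly wrong (true p ~ 0.05).
    prior_wrong = 0.01
    p_ok, p_bad = 0.80, 0.05
    n = 5                                # five predicted events, zero hits
    like_ok = (1 - p_ok) ** n            # 0.2^5  = 3.2e-4
    like_bad = (1 - p_bad) ** n          # 0.95^5 ~ 0.77
    odds = (prior_wrong * like_bad) / ((1 - prior_wrong) * like_ok)
    print(odds)                          # ~24-to-1 that the 80% figure was wrong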

But in what sense was that probability real if only one history ever unfolds and the event never occurs?

You say that probabilities are "real enough" because they are based on prior experience. But prior experience in a single-history universe is just another way of saying, "this is what happened before." That’s not an explanation; it’s circular reasoning.
No, it's the most basic form of prediction from statistics.


You’re using past outcomes to justify probability assignments, but if probability is supposed to describe potential events, then what does it mean when a "possible" event never happens, despite being assigned a nonzero probability?
You keep harping on "it never happens".  Bruce and I have both addressed that.  If you toss a coin twice, it may "never happen" to come up tails.  If you toss it a thousand times and it never comes up tails, it's a two-headed coin.
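In numbers (a toy Bayesian update; the one-in-a-million prior for a two-headed coin is invented):

    # Posterior probability the coin is two-headed after k heads in a row,
    # starting from an invented one-in-a-million prior.
    prior = 1e-6
    for k in (2, 20, 1000):
        like_fair = 0.5 ** k             # a fair coin gives k straight heads
        post = prior / (prior + (1 - prior) * like_fair)
        print(k, post)                   # ~4e-6, then ~0.51, then ~1.0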


In a multiverse framework, probability describes the relative frequency of events across actualized branches.
You keep using the word "actualized", which according to my dictionary means "made actual".  But that's false.  The branches are just what, in another context, you characterize as mere abstract mathematics.


It is not just an abstract expectation—it is grounded in the structure of the wavefunction itself.
Which itself is an abstraction, an abstraction that does not even include probabilities.


You dismiss the idea that measure in the wavefunction corresponds to probability, but this is not an assumption—it follows naturally from the mathematics of quantum mechanics. The Born rule is not an extra assumption in MWI; it emerges from decision theory (Deutsch-Wallace), from symmetry arguments (Zurek’s envariance), or from self-locating uncertainty. You keep demanding "proof,"
It's not just me.  People have been searching for a way to derive the Born rule for decades, and in every case so far the derivations have been found to assume things equivalent to assuming the Born rule.  And that goes for Deutsch and Wallace.  Zurek doesn't even pretend to derive it; he just asserts that it must work such that only the Born rule probabilities apply to stable worlds.  Read the literature.


yet you accept the Born rule as a brute fact in your own framework without any justification beyond "that’s just how quantum mechanics works."
That's why physics is an empirical science.  Some things are derived from observation, not "self evident axioms".


Meanwhile, your single-history approach provides no mechanism for why probabilities match experimental results.
Do you think MWI would be true even if it didn't match experimental results??


You claim that probability "just works" without explaining why the realized history should respect the Born distribution at all. You dismiss branch weighting in MWI as unproven, yet you offer no competing explanation for why a single sequence of events should follow probabilistic predictions.

Ultimately, your position amounts to: "Random events happen, probabilities just work, and there’s no deeper reason for anything."
That seems to be the case for QM.  It may anger you or disappoint you, but Nature doesn't care about your demands.


That’s not an explanation—it’s an assertion that we shouldn’t ask questions. If you’re satisfied with that, fine, but let’s not pretend it’s a superior foundation for probability. It’s just giving up on understanding why reality follows probabilistic laws in the first place.
I'm fine with asking questions.  I'd be fine with finding a way to derive Born's rule without assuming it or something equivalent.  What I'm not fine with is inventing imaginary worlds and claiming that's the same thing.

Brent

Brent Meeker

unread,
Feb 6, 2025, 11:06:19 PMFeb 6
to everyth...@googlegroups.com



On 2/6/2025 1:29 AM, PGC wrote:
Quentin, you are perhaps too generous to believe in discussion here. Consider halting the defense of many histories and go on offense: What principle selects that singular outcome? What privileges it above all others and by what means/mechanism? There is no coherent answer without invoking a god, a Brent, an Alan, Cosmin, Trump, Spud, Giulio, it just happens etc. who just proclaim it on some creation day - and it was good. Closet bible worshippers. Apologies, but the types of fallacious argument (ad verecundiam) they rely on, are the same, even if not regarding this exact proposition.
If you want to make an argument, have the guts to make it yourself.  You, like Quentin, apparently have never heard of probabilities and assume every event must have a cause.  What principle mandates that?  What privileges every event to have a cause, and not only one cause but a chain of causes going back to the origin of the universe...at which point you must invoke a god.  You apparently fail even to understand your own argument.

Brent

Alan Grayson

unread,
Feb 6, 2025, 11:34:27 PMFeb 6
to Everything List
Predictions of infinite anything can never be verified. But when we discussed the many turns at a T intersection, you seemed to accept the countable worlds that must be realized when applying the MWI. In that case, you could be correct that uncountable might be a stretch when it comes to physics, as compared to pure mathematics. So, it seems to me, you have to grant the existence of the countable worlds produced by the Rydberg series in the H-atom in the context of the MWI. AG

Alan Grayson

unread,
Feb 6, 2025, 11:40:41 PMFeb 6
to Everything List
The Rydberg series in the H-atom refers to an infinite set of states, but their energy content has an upper limit, so I don't see your comment as relevant. AG

Brent Meeker

unread,
Feb 6, 2025, 11:48:51 PMFeb 6
to everyth...@googlegroups.com
Depends on how the word "was" is meant.  If "was" refers to before you hit the ball, then the probability was not 1.0.  Even after it hit the ground and was rolling it was not 1.  It only "was" 1 if "was" refers to now.

Brent



John K Clark    See what's on my new list at  Extropolis


 

Quentin Anciaux

unread,
Feb 7, 2025, 1:32:11 AMFeb 7
to everyth...@googlegroups.com
Brent,

You keep insisting that probability is justified simply because "that’s how physics works," but that doesn’t address the foundational issue. You rely on probability as an empirical tool, yet in a single-history framework, it describes events that were never real in any way. If probability is meant to quantify potentialities, but only one sequence of events ever occurs, then the unrealized possibilities were never actually possible—they were just numbers with no ontological status.

You say that if an event never happens, then we should adjust our prior probabilities. Fine. But in a single-history universe, every event only happens once, so there’s no way to distinguish between a genuine probability assignment and an incorrect one. If an asteroid has an "80% chance" of hitting Earth and doesn’t, was the probability wrong? Or did we just get lucky? In a framework where only one history exists, you can always retroactively claim the probabilities were correct, no matter what happens, because there’s no underlying structure to validate or invalidate them. That makes probability little more than a storytelling device.

You mock MWI’s "imaginary worlds," yet in your framework, probability relies on imaginary possibilities that never had any reality. The difference is that in MWI, probability describes real distributions of outcomes across branches, while in a single-history world, probability is just a tool for reasoning about things that were never going to happen. You claim that probabilities in MWI aren’t "actualized," yet you rely on probabilities that, in a single-history world, have even less connection to reality—they refer to events that never happened and never will.

As for the Born rule, you claim that all derivations assume it outright, but that’s simply false. The Deutsch-Wallace approach derives it from decision theory, Zurek’s envariance provides symmetry-based reasoning, and other work has shown that the Born rule emerges naturally from the structure of the wavefunction. Meanwhile, your framework just assumes it as a brute fact, with no deeper explanation beyond "that’s how physics works." If you’re fine with that level of arbitrariness, that’s on you, but don’t pretend it’s a superior foundation for probability.

Ultimately, your position amounts to saying that probabilities "just work" and that we shouldn’t ask why. That’s fine if you’re satisfied with a purely instrumentalist view, but don’t pretend that’s a deeper understanding. MWI at least attempts to provide a foundation for probability, while the single-history view just asserts that numbers assigned before an event have meaning, even when they describe things that never happened. If that’s not an empty formalism, I don’t know what is.

Quentin 

John Clark

unread,
Feb 7, 2025, 7:14:09 AMFeb 7
to everyth...@googlegroups.com
On Thu, Feb 6, 2025 at 11:48 PM Brent Meeker <meeke...@gmail.com> wrote:

>> If AFTER I hit a golf ball 200 yards and we all see that it landed on one particular blade of grass, and you ask what was the probability that it landed on that one particular blade, the correct answer would be 100%, despite the fact that there are millions of other blades of grass that it could've landed on. 
 
Depends on how the word "was" is meant.  If "was" refers to before you hit the ball, then the probability was not 1.0.  Even after it hit the ground and was rolling it was not 1.  It only "was" 1 if "was" refers to now.

The ambiguity doesn't come from "was" but from another English word, "probability"; unfortunately the same word is used to describe 4 different things:

1) Probability is the ratio of favorable outcomes to all outcomes. 

2) Probability is the long run frequency of an event occurring (which cannot assign a probability to a single event).

3) Probability is the square of the absolute value of a quantum wave function.

4) Probability is a degree of belief which can be updated when more information becomes available.

In the above you're using the last meaning of the word, the Bayesian meaning, and it is subjective. Before I open the box, the belief I have that I am in the universe where the cat is alive is only at 50%; when I open the box I gain new information and my uncertainty disappears in both universes: in one it climbs to 100% and in the other it drops to 0%.  
  John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Feb 7, 2025, 5:28:22 PMFeb 7
to everyth...@googlegroups.com
Thus arbitrarily imposing a frequentist model on the world by imagining an ensemble of universes.  This is really unnecessary.  It's just a sop to intuition.  Why not accept that probabilities need not be frequencies?  Did you not read my essay on the subject?  I attach it again.

Brent

Ch1-Probability-Statistics.pdf

John Clark

unread,
Feb 8, 2025, 7:08:15 AMFeb 8
to everyth...@googlegroups.com
On Fri, Feb 7, 2025 at 5:28 PM Brent Meeker <meeke...@gmail.com> wrote:

Thus arbitrarily imposing a frequentist model on the world by imagining an ensemble of universes. 

Hugh Everett wasn't imagining, he was just taking seriously a prediction that Schrodinger's Equation makes; it's true that particular prediction can't be tested, but many other predictions that the equation makes can be and they've all passed with flying colors; therefore I see no reason why your default condition should be to assume that other prediction is pure nonsense, especially given the fact that it can explain why the quantum world is so weird.

 
This is really unnecessary.  It's just a sop to intuition. 

I don't know what you mean by that. If you can find a logical reason to justify your intuition that is not a "sop", it is a profound revelation.  


Why not accept that probabilities need not be frequencies? 

I do because you can't use that approach to assess the probability of a unique event, such as the probability that X will win the next election. The 4 meanings of the word "probability" that I mentioned, the ratio of favorable outcomes to all outcomes, the long run frequency of an event occurring, a degree of belief which can be updated when more information becomes available, and the square of the absolute value of a quantum wave function, are all valid and do not contradict each other; which one you use depends on the circumstances. However if you don't believe in Many Worlds then, although you know from experiment it works, it's very hard to understand why the square of the absolute value of a quantum wave function works and how it can have any sort of relationship with the three other meanings of the word "probability".

And it's always true that no matter how you calculate a probability, if you obtain new information you may have to change that number. Monty Hall asks you to pick a door, and there is only one chance in three that you will pick the one that has the prize behind it; but later, when Monty opens a door that you did NOT pick, and you can see that it did NOT have the prize behind it, and he gives you a chance to change your original pick, you do, because now that you've obtained new information your chance of getting the prize increases from 1/3 to 2/3.
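That 1/3-to-2/3 shift is easy to check numerically. Here is a minimal Monte Carlo sketch in Python (the function name and trial count are just illustrative choices):

import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round; return True if the player ends up with the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the single remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay ~ {stay:.3f} (expect 1/3), switch ~ {swap:.3f} (expect 2/3)")

With 100,000 rounds the two frequencies land close to 0.333 and 0.667, exactly the Bayesian update described above.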


Did you not read my essay on the subject?  

Yes, and I couldn't find anything in it that was controversial; I agree with it, but I think you should have at least mentioned the square of the absolute value of a quantum wave function. I especially liked it when you said:  

"So there are (at least) four ways to interpret a probability assignment"

John K Clark    See what's on my new list at  Extropolis

Bruce Kellett

unread,
Feb 8, 2025, 5:01:26 PMFeb 8
to everyth...@googlegroups.com
On Sat, Feb 8, 2025 at 11:08 PM John Clark <johnk...@gmail.com> wrote:
On Fri, Feb 7, 2025 at 5:28 PM Brent Meeker <meeke...@gmail.com> wrote:

Thus arbitrarily imposing a frequentist model on the world by imagining an ensemble of universes. 

Hugh Everett wasn't imagining, he was just taking seriously a prediction that Schrodinger's Equation makes; it's true that particular prediction can't be tested, but many other predictions that the equation makes can be and they've all passed with flying colors;

And all other interpretations of QM have passed exactly the same tests with equally flying colours. Everett does not have a monopoly on truth.

Bruce

Brent Meeker

unread,
Feb 8, 2025, 6:06:05 PMFeb 8
to everyth...@googlegroups.com



On 2/8/2025 4:07 AM, John Clark wrote:
On Fri, Feb 7, 2025 at 5:28 PM Brent Meeker <meeke...@gmail.com> wrote:

Thus arbitrarily imposing a frequentist model on the world by imagining an ensemble of universes. 

Hugh Everett wasn't imagining, he was just taking seriously a prediction that Schrodinger's Equation makes;
Which is a very peculiar way of doing empirical science.  Schroedinger actually had the same problem with QM; he saw that "measurement" was not explained by the evolution of his equation.


it's true that particular prediction can't be tested, but many other predictions that the equation makes can be and they've all passed with flying colors;
Neglecting the point that all those other worlds have no existence beyond showing up in mathematics as having a probability bigger than zero and less than one.


therefore I see no reason why your default condition should be to assume that other prediction is pure nonsense, especially given the fact that it can explain why the quantum world is so weird.
I don't consider it "pure nonsense".  You're trying to push me into an extreme position.  I consider it an unsolved problem, and so I argue against the idea that MWI is a solution.  It isn't, because (1) it doesn't actually explain the mechanism of worlds splitting, as evidenced by Sean Carroll's answer to the question of whether the splitting is instantaneous across the universe or spreads out in some way at the speed of light.  He says, "It doesn't matter."  So much for a better explanation.  (2) It doesn't indicate how the Born rule is implemented in the multiple worlds.  Gleason's theorem shows that if the branches are assigned consistent measures, then the measures must satisfy the Born rule when there are three or more branches.  But a straightforward reading of just Schroedinger's equation doesn't tell you how probability measures get instantiated.
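For reference, Gleason's theorem can be stated compactly (standard textbook form; nothing here is specific to any interpretation): if \(\dim \mathcal{H} \ge 3\) and \(\mu\) is a measure on projection operators with \(\mu(I) = 1\) and \(\mu\big(\sum_i P_i\big) = \sum_i \mu(P_i)\) for mutually orthogonal \(P_i\), then there is a density operator \(\rho\) such that

\[
\mu(P) = \mathrm{Tr}(\rho P),
\]

which for a pure state \(\rho = |\psi\rangle\langle\psi|\) is exactly the Born rule, \(\mu(P_i) = |\langle i|\psi\rangle|^2\).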


 
This is really unecessary.  It's just a sop to intuition. 

I don't know what you mean by that. If you can find a logical reason to justify your intuition that is not a "sop", it is a profound revelation. 
Why doesn't your intuition just embrace probability and reflect that probability means some things happen and other things don't?  Do you do this wherever probabilities are used?  When you get a poker hand, do you imagine all possible poker hands were dealt in other worlds?



Why not accept that probabilities need not be frequencies? 

I do because you can't use that approach to assess the probability of a unique event, such as the probability that X will win the next election. The 4 meanings of the word "probability" that I mentioned, the ratio of favorable outcomes to all outcomes, the long run frequency of an event occurring, a degree of belief which can be updated when more information becomes available, and the square of the absolute value of a quantum wave function, are all valid and do not contradict each other; which one you use depends on the circumstances. However if you don't believe in Many Worlds then, although you know from experiment it works, it's very hard to understand why the square of the absolute value of a quantum wave function works and how it can have any sort of relationship with the three other meanings of the word "probability".

And it's always true that no matter how you calculate a probability, if you obtain new information you may have to change that number. Monty Hall asks you to pick a door, and there is only one chance in three that you will pick the one that has the prize behind it; but later, when Monty opens a door that you did NOT pick, and you can see that it did NOT have the prize behind it, and he gives you a chance to change your original pick, you do, because now that you've obtained new information your chance of getting the prize increases from 1/3 to 2/3.

Did you not read my essay on the subject?  

Yes, and I couldn't find anything in it that was controversial; I agree with it, but I think you should have at least mentioned the square of the absolute value of a quantum wave function. I especially liked it when you said:  

"So there are (at least) four ways to interpret a probability assignment"
Yes I'm thinking of adding a discussion of quantum probability to emphasize that not every probability is based on ignorance.  I originally wrote it for a class I was teaching to engineers involved in reliability prediction and testing.

Brent


John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Feb 9, 2025, 9:38:18 AMFeb 9
to everyth...@googlegroups.com
On Sat, Feb 8, 2025 at 6:06 PM Brent Meeker <meeke...@gmail.com> wrote:
>>>Thus arbitrarily imposing a frequentist model on the world by imagining an ensemble of universes. 

>> Hugh Everett wasn't imagining, he was just taking seriously a prediction that Schrodinger's Equation makes;
Which is a very peculiar way of doing empirical science. 

That's not peculiar for empirical science at all. We can't detect virtual particles and we will never be able to, but physicists believe they exist because they can explain how the Casimir Effect works and why the electron has the magnetic moment that it has. And it's not just in quantum mechanics. We don't know if the entire universe is finite or infinite, or if it's open or closed, but either way we do know that the entire universe must be MUCH larger than the observable universe.

If the universe is open then it has negative curvature, and thus it must be infinite, because there cannot be a finite space with uniform negative curvature without introducing boundaries and/or singularities. 

For a closed universe with a curvature of 0.4% (if it was larger than that we would've already detected it, and we haven't), the radius of curvature would need to be AT LEAST 160 times larger than the observable universe's radius, which is 46.5 billion light years; 160 × 46.5 billion = a radius of about 7.4 trillion light years, and the corresponding minimum volume of the entire universe would be 160^3, or roughly four million, times the volume of the observable universe. And there's more. Although we can see galaxies that are now 46.5 billion light years away, if a galaxy is further away than 17 billion light years (corresponding to a time when the universe was about 500 million years old) and we aimed a beam of light at it, that light would NEVER reach the galaxy, because relative to us space would be expanding faster than the speed of light.  
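The volume figure is just the cube of the radius ratio (a like-for-like comparison of spherical volumes; using the full 3-sphere volume \(2\pi^2 R^3\) for the closed case would make the ratio larger still):

\[
\frac{V_{\min}}{V_{\mathrm{obs}}} \;\gtrsim\; \left(\frac{R_{\min}}{r_{\mathrm{obs}}}\right)^{3} \;=\; 160^{3} \;\approx\; 4.1\times 10^{6}.
\]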
 
Schroedinger actually had the same problem with QM; he saw that "measurement" was not explained by the evolution of his equation.

"Measurement" is not explained by Schrodinger's equation IF you assume that everything follows that equation EXCEPT for a thing called "the observer" which for some unknown reason obeys only classical physics. 

>> it's true that particular prediction can't be tested, but many other predictions that the equation makes can be and they've all passed with flying colors;

Neglecting the point that all those other worlds have no existence beyond showing up in mathematics as having a probability bigger than zero and less than one.

 Paul Dirac thought the negative solutions that showed up in his equation had no existence beyond showing up in his mathematics, but he was wrong: they indicated the existence of antimatter. Dirac was later quoted as saying that his equation was smarter than he was.

 >> I see no reason why your default condition should be to assume that other prediction is pure nonsense, especially given the fact that it can explain why the quantum world is so weird.

> I don't consider it "pure nonsense". 

OK, I'm very glad to hear that! 

it doesn't actually explain the mechanism of worlds splitting, as evidenced by Sean Carroll's answer to the question of whether the splitting is instantaneous across the universe or spreads out in some way at the speed of light.  He says, "It doesn't matter."  So much for a better explanation. 

 Carroll is saying two things by that:

1) It's impossible even in theory to ever determine the answer to that question. 

2) The answer to that question is not important, that is to say it makes no observable difference, and it's not even clear that the question makes sense. 

I believe both points are valid. Contrary to what some say, Einstein didn't prove the Luminiferous Aether didn't exist, he proved it wasn't important.   
 
 It doesn't indicate how the Born rule is implemented in the multiple worlds. 

If there are multiple worlds then, until you open the box, you don't have enough information to be certain if you're in the world where the cat is alive or in the world where the cat is dead, so you would have to resort to probability; and if you're using Schrödinger's equation the Born Rule is the only way to make sure the number you get is between zero and one and all the probabilities add up to exactly one.  
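Written out, for a state expanded in the measurement basis, that requirement is just

\[
|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad P(i) = |c_i|^2 \in [0,1], \qquad \sum_i P(i) = \langle\psi|\psi\rangle = 1,
\]

and for the cat, \(|\psi\rangle = a\,|\mathrm{alive}\rangle + b\,|\mathrm{dead}\rangle\) with \(|a|^2 + |b|^2 = 1\).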


Why doesn't your intuition just embrace probability and reflect that probability means some things happen and other things don't. 

Because Schrodinger's equation is deterministic, "the atom just happens to decay" is an insufficient explanation. And because if X and Y react with each other and then the result of that reaction reacts with Z, I get one end result if I observe the X and Y reaction and something completely different if I don't observe it. Give me an intuitive explanation of how that could be without using Many Worlds. And then give me an intuitive explanation of how interaction-free measurement could work without using Many Worlds. 

 
When you get a poker hand, do imagine all possible poker hands were dealt in other worlds?

I could but in that particular case there are vastly simpler computational means I could use to obtain a useful probability. The situation would be very different if instead of cards you gave me a sealed box and I had to bet if there was a live or dead cat in it.  
 
not every probability is based on ignorance. 

I think at the deepest level every probability is based on ignorance because I think Many Worlds is correct and all that Many Worlds is saying is that Schrodinger's equation means what it says, and Schrodinger's equation is 100% deterministic. If I always knew what world I was in I would know if the cat was alive or dead before I opened the box and I wouldn't need to resort to probability for anything. 

 John K Clark    See what's on my new list at  Extropolis


John Clark

unread,
Feb 9, 2025, 2:05:01 PMFeb 9
to everyth...@googlegroups.com
On Sat, Feb 8, 2025 at 5:01 PM Bruce Kellett <bhkel...@gmail.com> wrote:

>> Hugh Everett wasn't imagining, he was just taking seriously a prediction that Schrodinger's Equation makes; it's true that particular prediction can't be tested, but many other predictions that the equation makes can be and they've all passed with flying colors;

And all other interpretations of QM have passed exactly the same tests with equally flying colours. Everett does not have a monopoly on truth.

But Everett was the only one who did NOT say that although it works amazingly well on everything else, for some unknown reason Schrodinger's Equation stops working the minute you start talking about something called "the observer". 

Alan Grayson

unread,
Feb 9, 2025, 2:28:16 PMFeb 9
to Everything List
On Sunday, February 9, 2025 at 7:38:18 AM UTC-7 John Clark wrote:
On Sat, Feb 8, 2025 at 6:06 PM Brent Meeker <meeke...@gmail.com> wrote:
it doesn't actually explain the mechanism of worlds splitting, as evidenced by Sean Carroll's answer to the question of whether the splitting is instantaneous across the universe or spreads out in some way at the speed of light.  He says, "It doesn't matter."  So much for a better explanation. 

 Carroll is saying two things by that:

1) It's impossible even in theory to ever determine the answer to that question. 

2) The answer to that question is not important, that is to say it makes no observable difference, and it's not even clear that the question makes sense. 

Easily understandable. Carroll has given up on physics and now plays the clown. AG 

Russell Standish

unread,
Feb 9, 2025, 4:49:12 PMFeb 9
to everyth...@googlegroups.com
On Thu, Feb 06, 2025 at 11:38:52AM +1100, Bruce Kellett wrote:
>
> Many worlds theory does not have any comparable way of relating probabilities
> to the properties of the wave function. In fact, if all possibilities are
> realized on every trial, the majority of observers will get results that
> contradict the Born probabilities.
>

I'm not sure what you mean by "contradict", but the majority of
observers will get results that lie within one standard deviation of
the expected value (i.e. mean) according to the distribution of Born
probabilities. If this is what you mean by "contradict", then you are
trivially correct, but uninteresting. If you mean the above statement
is false according to the MWI, then I'd like to know why. It sure
doesn't seem so to me.
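The arithmetic behind the one-standard-deviation claim is just the binomial distribution. A short Python check, under the assumption that "majority of observers" is counted by total Born weight |amplitude|^2 of the outcome strings (which is, of course, part of what is in dispute here):

from math import comb, sqrt

def weight_within_one_sigma(n: int, p: float) -> float:
    """Total Born weight of length-n outcome strings whose count of
    successes lies within one standard deviation of the mean n*p."""
    mean, sigma = n * p, sqrt(n * p * (1 - p))
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1)
               if abs(k - mean) <= sigma)

print(weight_within_one_sigma(1000, 0.3))  # roughly 0.68, for any p away from 0 or 1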


--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------

Brent Meeker

unread,
Feb 9, 2025, 4:57:08 PMFeb 9
to everyth...@googlegroups.com



On 2/9/2025 6:37 AM, John Clark wrote:
On Sat, Feb 8, 2025 at 6:06 PM Brent Meeker <meeke...@gmail.com> wrote:
>>>Thus arbitrarily imposing a frequentist model on the world by imagining an ensemble of universes. 

>> Hugh Everett wasn't imagining, he was just taking seriously a prediction that Schrodinger's Equation makes;
Which is a very peculiar way of doing empirical science. 

That's not peculiar for empirical science at all. We can't detect virtual particles and we will never be able to, but physicists believe they exist because they can explain how the Casimir Effect works and why the electron has the magnetic moment that it has. And it's not just in quantum mechanics.
Virtual particles are just a mathematical tool to form infinite sums in a consistent way.  No physicist believes they exist.


We don't know if the entire universe is finite or infinite, or if it's open or closed, but either way we do know that the entire universe must be MUCH larger than the observable universe.
So what?  That doesn't mean you can suppose that what is out beyond is whatever you need, like the money needed to balance your bank account.




Why doesn't your intuition just embrace probability and reflect that probability means some things happen and other things don't. 

Because Schrodinger's equation is deterministic, "the atom just happens to decay" is an insufficient explanation. 
But all MWI does is push the insufficiency off to "you just happen to be in the world where the atom decayed at 3:10pm".

And because if X and Y react with each other and then the result of that reaction reacts with Z, I get one end result if I observe the X and Y reaction and something completely different if I don't observe it. Give me an intuitive explanation of how that could be without using Many Worlds. And then give me an intuitive explanation of how interaction free measurement could work without using Many Worlds. 

 
When you get a poker hand, do imagine all possible poker hands were dealt in other worlds?

I could but in that particular case there are vastly simpler computational means I could use to obtain a useful probability. The situation would be very different if instead of cards you gave me a sealed box and I had to bet if there was a live or dead cat in it.  
 
not every probability is based on ignorance. 

I think at the deepest level every probability is based on ignorance because I think Many Worlds is correct and all that Many Worlds is saying is that Schrodinger's equation means what it says, and Schrodinger's equation is 100% deterministic. If I always knew what world I was in I would know if the cat was alive or dead before I opened the box and I wouldn't need to resort to probability for anything.

I think you think MWI is correct simply because you don't know of any alternatives.  You have a cartoonish idea of Copenhagen and think of it as the only alternative.  A lot of other physicists, like me, think MWI is no better than Copenhagen.  It just pushes the problem off to more obscure questions, like how does the orthogonality of worlds spread?  And why isn't Zeh's Darwinian decoherence enough?  Here are a few papers which discuss single-world solutions to the measurement problem.  Don't bother to read them though; they'll just perturb your certainty.

Collapse Miscellany
Philip Pearle
An introduction to the CSL (Continuous Spontaneous Localization) theory of dynamical wave function collapse is provided, including a derivation of CSL from two postulates. There follows applications to a free particle, or to a `small' rigid cluster of free particles, in a single wave-packet and in interfering packets.
https://arxiv.org/abs/1209.5082v2

Quantum Mechanics Without State Vectors
Steven Weinberg
It is proposed to give up the description of physical states in terms of ensembles of state vectors with various probabilities, relying instead solely on the density matrix as the description of reality. With this definition of a physical state, even in entangled states nothing that is done in one isolated system can instantaneously affect the physical state of a distant isolated system. This change in the description of physical states opens up a large variety of new ways that the density matrix may transform under various symmetries, different from the unitary transformations of ordinary quantum mechanics. Such new transformation properties have been explored before, but so far only for the symmetry of time translations into the future, treated as a semi-group. Here new transformation properties are studied for general symmetry transformations forming groups, rather than semi-groups. Arguments are given that such symmetries should act on the density matrix as in ordinary quantum mechanics, but loopholes are found for all of these arguments.
arXiv:1405.3483v1

A Synopsis of the Minimal Modal Interpretation of Quantum Theory
Authors: Jacob A. Barandes, David Kagan
Abstract: We summarize a new realist interpretation of quantum theory that builds on the existing physical structure of the theory and allows experiments to have definite outcomes, but leaves the theory's basic dynamical content essentially intact. Much as classical systems have specific states that evolve along definite trajectories through configuration spaces, the traditional formulation of quantum theory asserts that closed quantum systems have specific states that evolve unitarily along definite trajectories through Hilbert spaces, and our interpretation extends this intuitive picture of states and Hilbert-space trajectories to the case of open quantum systems as well. Our interpretation---which we claim is ultimately compatible with Lorentz invariance---reformulates wave-function collapse in terms of an underlying interpolating dynamics, makes it possible to derive the Born rule from deeper principles, and resolves several open questions regarding ontological stability and dynamics
arXiv:1405.6754

Measurement and Quantum Dynamics in the Minimal Modal Interpretation of Quantum Theory
Authors: Jacob A. Barandes, David Kagan
Abstract: Any realist interpretation of quantum theory must grapple with the measurement problem and the status of state-vector collapse. In a no-collapse approach, measurement is typically modeled as a dynamical process involving decoherence. We describe how the minimal modal interpretation closes a gap in this dynamical description, leading to a complete and consistent resolution to the measurement problem and an effective form of state collapse. Our interpretation also provides insight into the indivisible nature of measurement--the fact that you can't stop a measurement part-way through and uncover the underlying `ontic' dynamics of the system in question. Having discussed the hidden dynamics of a system's ontic state during measurement, we turn to more general forms of open-system dynamics and explore the extent to which the details of the underlying ontic behavior of a system can be described. We construct a space of ontic trajectories and describe obstructions to defining a probability measure on this space.
arXiv:1807.07136

I've listed only papers that are easily available on the arXiv.  There are others in older papers and books.

Brent

Bruce Kellett

unread,
Feb 9, 2025, 5:26:10 PMFeb 9
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 8:49 AM Russell Standish <li...@hpcoders.com.au> wrote:
On Thu, Feb 06, 2025 at 11:38:52AM +1100, Bruce Kellett wrote:
>
> Many worlds theory does not have any comparable way of relating probabilities
> to the properties of the wave function. In fact, if all possibilities are
> realized on every trial, the majority of observers will get results that
> contradict the Born probabilities.
>

I'm not sure what you mean by "contradict", but the majority of
observers will get results that lie within one standard deviation of
the expected value (ie mean) according to the distribution of Born
probabilities. If this is what you mean by "contradict", then you are
trivially correct, but uninteresting. If you mean the above statement
is false according to the MWI, then I'd like to know why. It sure
doesn't seem so to me.

It does depend on what value you take for N, the number of trials. In the limit of very large N, the law of large numbers does give the result you suggest. But for intermediate values of N, MWI says that there will always be branches for which the ratio of successes to N falls outside any reasonable error bound on the expected Born value.

This problem has been noted by others, and when asked about it, Carroll simply dismissed the observers who get results that invalidate the Born Rule as just poor unlucky suckers. Sure, in a single world system, there is always a small probability that you will get anomalous results. But that is always a small probability. Whereas, in MWI, there are always such branches with anomalous results, even for large N. The difference is important.

The other point is that the set of branches obtained in Everettian many worlds is independent of the amplitudes, or the Born probabilities for each outcome, so observations on any one branch cannot be used as evidence, either for or against the theory.

See the articles by Adrian Kent and David Albert in "Many Worlds: Everett, Quantum Theory, and Reality"(OUP, 2010) Edited by Saunders, Barrett, Kent, and Wallace.

Bruce

Russell Standish

unread,
Feb 9, 2025, 5:51:26 PMFeb 9
to everyth...@googlegroups.com
Yes, but the proportion of "poor unlucky suckers" in the set of all
observers becomes vanishingly small as the number of observers tends to
infinity.

As JC says, we don't know if the number of observers is countably
infinite (which would be my guess), uncountably infinite or just plain
astronomically large. In any case, the proportion of observers seeing
results outside of one standard deviation is of measure zero for
practical purposes. If that is not the case, please explain.



> The other point is that the set of branches obtained in Everettian many worlds
> is independent of the amplitudes, or the Born probabilities for each outcome,
> so observations on any one branch cannot be used as evidence, either for or
> against the theory.
>

We've had this discussion before. They're not independent, because the
preparation of the experiment that defines the Born probabilities
filters the set of allowed branches from which we sample the
measurements.

> See the articles by Adrian Kent and David Albert in "Many Worlds: Everett,
> Quantum Theory, and Reality"(OUP, 2010) Edited by Saunders, Barrett, Kent, and
> Wallace.
>

I've already got a copy of Kent's paper in my reading stack. Albert's paper
appears to be behind a paywall, alas :(.

In any case, it'll be a while before I get to the paper - just
wondering if you had a 2 minute explanation of the argument. What I've
heard so far on this list hasn't been particularly convincing.

Bruce Kellett

unread,
Feb 9, 2025, 5:52:25 PMFeb 9
to everyth...@googlegroups.com
I forgot to add that Kent's paper is also available on arxiv: arxiv.org/abs/0905.0624


Bruce

Bruce Kellett

unread,
Feb 9, 2025, 6:06:28 PMFeb 9
to everyth...@googlegroups.com
The number of trials does not have to tend to infinity. That is just the frequentist mistake.


As JC says, we don't know if the number of observers is countably
infinite (which would be my guess), uncountably infinite or just plain
astronomically large. In any case, the proportion of observers seeing
results outside of one standard deviation is of measure zero for
practical purposes. If that is not the case, please explain.

The number of anomalous results in MWI is not of measure zero in any realistic case.

> The other point is that the set of branches obtained in Everettian many worlds
> is independent of the amplitudes, or the Born probabilities for each outcome,
> so observations on any one branch cannot be used as evidence, either for or
> against the theory.
>

We've had this discussion before. They're not independent, because the
preparation of the experiment that defines the Born probabilities
filters the set of allowed branches from which we sample the
measurements.

I don't know what this means.

> See the articles by Adrian Kent and David Albert in "Many Worlds: Everett,
> Quantum Theory, and Reality"(OUP, 2010) Edited by Saunders, Barrett, Kent, and
> Wallace.
>

I've already got a copy of Kent's paper in my reading stack. Albert's paper
appears to be behind a paywall, alas :(.

In any case, it'll be a while before I get to the paper - just
wondering if you had a 2 minute explanation of the argument. What I've
heard so far on this list hasn't been particularly convincing.

A quote from Kent (p. 326 of the book)
" After N trials, the multiverse contains 2^N branches, corresponding to all N possible binary string outcomes. The inhabitants on a string with pN zero and (1-p)N one outcomes will, with a degree of confidence that tends towards one as N gets large, tend to conclude that the weight p is attached to zero outcomes branches and weight (1-p) is attached to one outcome branches. In other words, everyone, no matter what outcome string they see, tends towards complete confidence in the belief that the relative frequencies they observe represent the weights."

Bruce

Russell Standish

unread,
Feb 9, 2025, 6:52:31 PMFeb 9
to everyth...@googlegroups.com
I wasn't talking about the number of trials, but the number of
observers. That is either astronomically large or an actual infinity.

>
>
> As JC says, we don't know if the number of observers is countably
> infinite (which would be my guess), uncountably infinite or just plain
> astronomically large. In any case, the proportion of observers seeing
> results outside of one standard deviation is of measure zero for
> practical purposes. If that is not the case, please explain.
>
>
> The number of anomalous results in MWI is not of measure zero in any realistic
> case.
>

I'm trying to see why you say that.

>
> > The other point is that the set of branches obtained in Everettian many
> worlds
> > is independent of the amplitudes, or the Born probabilities for each
> outcome,
> > so observations on any one branch cannot be used as evidence, either for
> or
> > against the theory.
> >
>
> We've had this discussion before. They're not independent, because the
> preparation of the experiment that defines the Born probabilities
> filters the set of allowed branches from which we sample the
> measurements.
>
>
> I don't know what this means.
>

In preparing the experiment, you are already filtering out the
observers who choose to observe something different. And that
definitely changes the set of worlds, or branches under
consideration. So you cannot say (as you did) "the set of branches
obtained in Everettian many worlds is independent of the
amplitudes". Whether the set of branches changes in precisely the way
to recover the Born rule is a different question, of course, and
obviously rather hard to prove.

>
> > See the articles by Adrian Kent and David Albert in "Many Worlds:
> Everett,
> > Quantum Theory, and Reality"(OUP, 2010) Edited by Saunders, Barrett,
> Kent, and
> > Wallace.
> >
>
> I've already got a copy of Kent's paper in my reading stack. Albert's paper
> appears to be behind a paywall, alas :(.
>
> In any case, it'll be a while before I get to the paper - just
> wondering if you had a 2 minute explanation of the argument. What I've
> heard so far on this list hasn't been particularly convincing.
>
>
> A quote from Kent (p. 326 of the book)
> " After N trials, the multiverse contains 2^N branches, corresponding to all N
> possible binary string outcomes. The inhabitants on a string with pN zero and
> (1-p)N one outcomes will, with a degree of confidence that tends towards one as
> N gets large, tend to conclude that the weight p is attached to zero outcomes
> branches and weight (1-p) is attached to one outcome branches. In other words,
> everyone, no matter what outcome string they see, tends towards complete
> confidence in the belief that the relative frequencies they observe represent
> the weights."

That is true. And the observers observing something like an all-zero
sequence, or alternating 1s and 0s, are living in what we called a
"wabbity universe" some years ago on this list. The proportion of such
observers becomes vanishingly small as N→∞ in the space of all observers.

I'm still not convinced there is a problem here...

Bruce Kellett

unread,
Feb 9, 2025, 7:35:25 PMFeb 9
to everyth...@googlegroups.com
There are only ever 2^N branches under consideration, so only ever 2^N observers.
Additional branches due to decoherence can be disregarded for these purposes since
all such observers only duplicate some that have already been counted.

>     As JC says, we don't know if the number of observers is countably
>     infinite (which would be my guess), uncountably infinite or just plain
>     astronomically large. In any case, the proportion of observers seeing
>     results outside of one standard deviation is of measure zero for
>     practical purposes. If that is not the case, please explain.
>
>
> The number of anomalous results in MWI is not of measure zero in any realistic
> case.
>

I'm trying to see why you say that.

Read Kent.

>     > The other point is that the set of branches obtained in Everettian many  worlds
>     > is independent of the amplitudes, or the Born probabilities for each outcome,
>     > so observations on any one branch cannot be used as evidence, either for  or
>     > against the theory.
>     >
>
>     We've had this discussion before. They're not independent, because the
>     preparation of the experiment that defines the Born probabilities
>     filters the set of allowed branches from which we sample the
>     measurements.
>
>
> I don't know what this means.
>

In preparing the experiment, you are already filtering out the
observers who choose to observe something different. And that
definitely changes the set of worlds, or branches under
consideration. So you cannot say (as you did) "the set of branches
obtained in Everettian many worlds is independent of the
amplitudes". Whether the set of branches changes in precisely the way
to recover the Born rule is a different question, of course, and
obviously rather hard to prove.

There is no such selective state preparation. Nobody gets filtered out in this way. You are just making things up.


>     > See the articles by Adrian Kent and David Albert in "Many Worlds:  Everett,
>     > Quantum Theory, and Reality"(OUP, 2010) Edited by Saunders, Barrett,  Kent, and
>     > Wallace.
>     >
>
>     I've already got a copy of Kent's paper in my reading stack. Albert's paper
>     appears to be behind a paywall, alas :(.
>
>     In any case, it'll be a while before I get to the paper - just
>     wondering if you had a 2 minute explanation of the argument. What I've
>     heard so far on this list hasn't been particularly convincing.
>
>
> A quote from Kent (p. 326 of the book)
> " After N trials, the multiverse contains 2^N branches, corresponding to all N
> possible binary string outcomes. The inhabitants on a string with pN zero and
> (1-p)N one outcomes will, with a degree of confidence that tends towards one as
> N gets large, tend to conclude that the weight p is attached to zero outcomes
> branches and weight (1-p) is attached to one outcome branches. In other words,
> everyone, no matter what outcome string they see, tends towards complete
> confidence in the belief that the relative frequencies they observe represent
> the weights."

That is true. And the observers observing something like an all zero
sequence, or alternating 1s and 0s, are living in what we called a
"wabbity universe" some years ago on this list. Those observers become
vanishingly small as N→∞ in the space of all observers.

I'm still not convinced there is a problem here...

We don't have to take N to infinity to get a problem. The trouble is that since the set of branches obtained is the same for all values of p (in Kent's example),
and the dominant ratio of zeros to ones is 50/50 in every case as N becomes large, it is always the case that the majority of observers get results that disagree with the Born rule. In most cases, observers find that QM is disconfirmed.
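Concretely, the counting argument can be mocked up in a few lines of Python (a sketch under the equal-counting assumption at issue here; nothing in it depends on the coefficients a and b):

from math import comb

def fraction_of_branches_near_half(n: int, tol: float = 0.05) -> float:
    """Fraction of the 2**n equally-counted outcome strings whose
    observed frequency of zeros lies within tol of 0.5."""
    near = sum(comb(n, k) for k in range(n + 1) if abs(k / n - 0.5) <= tol)
    return near / 2**n

for n in (100, 1000, 10000):
    print(n, fraction_of_branches_near_half(n))  # fraction tends to 1 as n grows

Under plain branch counting, the 50/50 band swallows almost everything as N grows, whatever the amplitudes were.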

Bruce

Russell Standish

unread,
Feb 9, 2025, 9:47:43 PMFeb 9
to everyth...@googlegroups.com
These 2^N observers will be unevenly weighted. If you want to do even
weighting by some sort of symmetry argument, you will need to
subdivide more finely.

> Additional branches due to decoherence can be disregarded for these purposes
> since
> all such observers only duplicate some that have already been counted.
>
>
> >     As JC says, we don't know if the number of observers is countably
> >     infinite (which would be my guess), uncountably infinite or just
> plain
> >     astronomically large. In any case, the proportion of observers seeing
> >     results outside of one standard deviation is of measure zero for
> >     practical purposes. If that is not the case, please explain.
> >
> >
> > The number of anomalous results in MWI is not of measure zero in any
> realistic
> > case.
> >
>
> I'm trying to see why you say that.
>
>
> Read Kent.

Sure - I will get to it eventually. And probably have my own issues with that.

In the meantime, you will need to excuse my incredulity about your statements.

>
>
> >     > The other point is that the set of branches obtained in Everettian
> many  worlds
> >     > is independent of the amplitudes, or the Born probabilities for
> each outcome,
> >     > so observations on any one branch cannot be used as evidence,
> either for  or
> >     > against the theory.
> >     >
> >
> >     We've had this discussion before. They're not independent, because
> the
> >     preparation of the experiment that defines the Born probabilities
> >     filters the set of allowed branches from which we sample the
> >     measurements.
> >
> >
> > I don't know what this means.
> >
>
> In preparing the experiment, you are already filtering out the
> observers who choose to observe something different. And that
> definitely changes the set of worlds, or branches under
> consideration. So you cannot say (as you did) "the set of branches
> obtained in Everettian many worlds is independent of the
> amplitudes". Whether the set of branches changes in precisely the way
> to recover the Born rule is a different question, of course, and
> obviously rather hard to prove.
>
>
> There is no such selective state preparation. Nobody gets filtered out in this
> way. You are just making things up.
>

The way you brought it up earlier was to assume two SG apparatuses
where observers were free to rotate one of the apparatuses by angle θ
with respect to the other. You inappropriately applied an indifference
principle to assign a uniform weight to all branches.

Yes - the decision to set the apparatus at angle θ filters the set of
observers to those who make the same choice. And that filtering makes
a difference in this case.

I'll read Kent's paper to see if he makes the same error.

Bruce Kellett

unread,
Feb 9, 2025, 10:14:22 PMFeb 9
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 1:47 PM Russell Standish <li...@hpcoders.com.au> wrote:
On Mon, Feb 10, 2025 at 11:35:11AM +1100, Bruce Kellett wrote:
> On Mon, Feb 10, 2025 at 10:52 AM Russell Standish <li...@hpcoders.com.au>
> wrote:
>
>    
>     >     On Mon, Feb 10, 2025 at 09:25:57AM +1100, Bruce Kellett wrote:
>     >     > On Mon, Feb 10, 2025 at 8:49 AM Russell Standish

And where do these weights come from? If you do the construction, all 2^N branches arise in the same way, so they have the same weight. No indifference principle need be applied. But in any case, whatever weights you might imagine these sequences of binary outcomes to have, the results are the same. For large N, the vast majority of observers will find a 50/50 split of zeros and ones (within a standard deviation). So regardless of the "weights", most will disagree with the Born result, which depends on the coefficients in the original wave function:

       |psi> = a|0> + b|1>.
As I have said, I did not apply the indifference principle because the sequences would have equal weight simply by construction. Besides, the "weights" assigned to these sequences are essentially irrelevant.

Yes - the decision to set the apparatus at angle θ filters the set of
observers to those who make the same choice. And that filtering makes
a difference in this case.

What utter nonsense. I used the case of a beam of spin-half particles polarized along the x-axis so that I could then observe these with a rotatable S-G magnet in order to get a range of values for the amplitudes a and b in the wave function |psi> above. There is no filtering. You have just not understood what is going on.

I'll read Kent's paper to see if he makes the same error.

Kent does not use this particular example, although he does point out that the "weights" assigned to each sequence in the 2^N possible sequences are irrelevant.

Bruce

Russell Standish

unread,
Feb 9, 2025, 10:41:24 PMFeb 9
to everyth...@googlegroups.com
That is another question entirely.

> If you do the construction, all 2^N
> branches arise in the same way, so they have the same weight.

This is where I disagree with you. The only way of assigning equal
weight is if there is some fundamental system symmetry that allows the
indifference principle to be applied. But by construction, that
symmetry is broken.

>
> As I have said. I did not apply the indifference principle because the
> sequences would have equal weight simply by construction. Besides, the
> "weights" assigned to these sequences are essentially irrelevant.

I disagree.

Bruce Kellett

unread,
Feb 9, 2025, 10:50:22 PMFeb 9
to everyth...@googlegroups.com
Then you will never understand the argument. The weights, wherever they come from, are largely irrelevant to the argument, which depends only on the observed proportions of zeros and ones in each sequence. That observed proportion can be used by the observer to estimate the value of the probability of a zero (p, in the above example). Since most sequences have approximately a 50/50 split, most observers will estimate p = 0.5, regardless of the initial coefficients a and b. Hence, MWI cannot reproduce the quantum mechanical results.

Bruce

Quentin Anciaux

unread,
Feb 10, 2025, 5:41:11 AMFeb 10
to everyth...@googlegroups.com
Bruce,

Yes, every possible experience is lived by some version of me in MWI, but that does not mean all experiences are equally likely or subjectively equivalent. The measure of a branch determines how many copies of me experience a given outcome. In practice, my conscious experience will overwhelmingly be shaped by the branches with higher measure, not by the rare and improbable ones.

For example, if a quantum event has a 1% probability, then there will be branches where I observe it, but they will be exponentially fewer than those where I do not. The measure is not just an abstract number—it reflects the relative weight of different outcomes in the wavefunction. This is why, as an observer, I will almost always see frequencies matching the Born rule, because the majority of my copies exist in branches where this distribution holds.

Your argument assumes that since all branches exist, they must be equiprobable, but this ignores the fact that measure determines how many copies of an observer exist in each branch. In a lottery, every ticket exists, but some are printed in larger quantities. Saying "all branches exist, so they must be equal" is as flawed as saying "all lottery tickets exist, so all should win equally."

Ultimately, my conscious experience is not determined by the mere existence of branches, but by the relative number of copies of me in each. Low-measure branches do exist, but they are not representative of my experience. This is why MWI naturally leads to the Born probabilities, without assuming collapse or introducing an arbitrary rule.

Your reasoning collapses probability into mere branch-counting, but probability is about where observers actually find themselves, not about an abstract collection of sequences.
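
To make this concrete, here is a small simulation (a sketch, assuming observer copies are sampled in proportion to branch measure; all names and numbers are illustrative):

```python
# Sketch: if a "random copy of me" is drawn in proportion to branch
# measure, the observed frequency of a 1% event clusters at 1%,
# not at 50%.
import random

random.seed(0)
p_rare = 0.01      # measure of the rare outcome (an assumption)
N = 1000           # trials in one observer history

def observed_frequency():
    # one measure-weighted observer history
    return sum(random.random() < p_rare for _ in range(N)) / N

samples = [observed_frequency() for _ in range(5000)]
print(f"mean observed frequency: {sum(samples) / len(samples):.4f}")  # ~0.01
```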

Quentin 


Bruce Kellett

unread,
Feb 10, 2025, 6:45:23 AMFeb 10
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 9:41 PM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

Yes, every possible experience is lived by some version of me in MWI, but that does not mean all experiences are equally likely or subjectively equivalent. The measure of a branch determines how many copies of me experience a given outcome. In practice, my conscious experience will overwhelmingly be shaped by the branches with higher measure, not by the rare and improbable ones.

You cannot prove this. It is pure speculation.

For example, if a quantum event has a 1% probability, then there will be branches where I observe it, but they will be exponentially fewer than those where I do not. The measure is not just an abstract number—it reflects the relative weight of different outcomes in the wavefunction. This is why, as an observer, I will almost always see frequencies matching the Born rule, because the majority of my copies exist in branches where this distribution holds.

No they don't.

Your argument assumes that since all branches exist, they must be equiprobable, but this ignores the fact that measure determines how many copies of an observer exist in each branch. In a lottery, every ticket exists, but some are printed in larger quantities. Saying "all branches exist, so they must be equal" is as flawed as saying "all lottery tickets exist, so all should win equally."

Ultimately, my conscious experience is not determined by the mere existence of branches, but by the relative number of copies of me in each. Low-measure branches do exist, but they are not representative of my experience. This is why MWI naturally leads to the Born probabilities, without assuming collapse or introducing an arbitrary rule.

Your reasoning collapses probability into mere branch-counting, but probability is about where observers actually find themselves, not about an abstract collection of sequences.

Like Russell, you have not even begun to understand the argument I am making. It has nothing to do with weights or the number of observers on each branch.

Let me recast the argument. We have a binary wave function: |psi> = a|0> + b|1>. For convenience I have taken a spin-half system, or photon polarizations. Then we can use a = cos(theta) and b = sin(theta) so that a^2 + b^2 = 1 is easily maintained and it is simple to rotate things to alter the magnitudes of the coefficients.

Now we run N trials of measuring this system at some angle. Since the basic MWI principle is that every possibility is realized on every trial, we get 2^N sequences of results, covering all possible binary sequences of length N. Note particularly that we get exactly the same set of sequences for any angle theta. (We must, because there are only 2^N possible sequences.)

The procedure is now to estimate the probability coefficient of the original wave function from our measured sequence (which is simply one of the 2^N). We do this by counting the number of zeros and/or ones in the sequence. Then p = n_zero/N. The weight of the sequence, whatever it is, does not enter into this calculation of the probability, which is why I can reasonably take all sequences to have the same weight (although I do not do this, and it is not necessary).

The point of this exercise is that the probability estimate that I get (p) is unlikely to be the Born probability, which is a^2. As N becomes large, the law of large numbers implies that a large majority of the sequences will have approximately equal numbers of zeros and ones (independently of the coefficients a and b). Consequently, the estimated probability will be 0.5 in nearly every case. This is only the Born probability for a set of angles of measure zero, so the majority of experimenters are going to find results that do not conform to the Born rule, and thus find that QM is disconfirmed. This follows directly from the requirement that every result be found on every trial, which is an essential feature of MWI, so MWI is disconfirmed -- it is not a viable interpretation of QM.
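
To make the counting concrete, here is a minimal sketch (illustrative only; exact binomial sums stand in for enumerating the 2^N sequences):

```python
# With every sequence counted once and no weights attached, the
# estimate p = n_zero/N clusters around 0.5; the amplitudes a and b
# never enter the calculation.
from math import comb

N = 20
total = 2 ** N
near_half = sum(comb(N, r) for r in range(N + 1) if abs(r / N - 0.5) <= 0.1)
print(f"{near_half / total:.3f} of all 2^{N} sequences give p in [0.4, 0.6]")
# -> ~0.737 for N = 20, approaching 1 as N grows (law of large numbers)
```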

Bruce

Quentin Anciaux

unread,
Feb 10, 2025, 7:09:45 AMFeb 10
to everyth...@googlegroups.com
Bruce,

Your argument assumes that all measurement sequences are equally likely, which is false in MWI. The issue is not about which sequences exist (they all do) but about how measure is distributed among them. The Born rule does not emerge from simple branch counting—it emerges from the relative measure assigned to each branch.

You claim that in large N trials, most sequences will have an equal number of zeros and ones, implying that the estimated probability will tend toward 0.5. But this ignores that the wavefunction does not generate sequences with uniform measure. The amplitude of each sequence is determined by the product of individual amplitudes along the sequence, and when you apply the Born rule iteratively, high-measure sequences dominate the observer’s experience.

Your mistake is treating measurement as though every sequence has equal likelihood, which contradicts the actual evolution of the wavefunction. Yes, there are 2^N branches, but those branches do not carry equal measure. The vast majority of measure is concentrated in the sequences that match the Born distribution, meaning that nearly all observers find themselves in worlds where outcomes obey the expected frequencies.

This is not speculation; it follows directly from the structure of the wavefunction. The weight of a branch is not just a number—it represents the relative frequency with which observers find themselves in different sequences. The fact that a branch exists does not mean it has equal relevance to an observer's experience.

Your logic would apply if MWI simply stated that all sequences exist and are equally likely. But that is not what MWI says. It says that the measure of a branch determines the number of observer instances that experience that branch. The overwhelming majority of those instances will observe the Born rule, not because of "branch counting," but because high-measure sequences contain exponentially more copies of any given observer.

If your argument were correct, QM would be falsified every time we ran an experiment, because we would never observe Born-rule statistics. Yet every experiment confirms the Born rule, which means your assumption that "all sequences contribute equally" is demonstrably false. You are ignoring that measure, not count, determines what observers experience.
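
A sketch of the arithmetic behind this (illustrative; the angle theta is an assumed example):

```python
# The total Born weight carried by all sequences with r zeros is
# C(N, r) * (a^2)^r * (b^2)^(N-r): a binomial distribution in r that
# concentrates at r/N ~ a^2, even though the *number* of such
# sequences always peaks at r/N = 0.5.
from math import comb, cos, pi

theta = pi / 6        # illustrative angle, so a^2 = cos^2(theta) = 0.75
p = cos(theta) ** 2
N = 100
near = [r for r in range(N + 1) if abs(r / N - p) <= 0.05]

weight = sum(comb(N, r) * p**r * (1 - p)**(N - r) for r in near)
count = sum(comb(N, r) for r in near) / 2 ** N
print(f"Born weight within 0.05 of a^2: {weight:.3f}")   # the bulk of it
print(f"fraction of sequences there:    {count:.2e}")    # tiny
```

Whether that weight can be read as a count of observer copies is, of course, the very point in dispute.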




John Clark

unread,
Feb 10, 2025, 10:23:39 AMFeb 10
to everyth...@googlegroups.com
On Sun, Feb 9, 2025 at 4:57 PM Brent Meeker <meeke...@gmail.com> wrote:

>> That's not peculiar for empirical science at all. We can't detect virtual particles and we will never be able to, but physicists believe they exist because they can explain how the Casimir Effect works and why the electron has the magnetic moment that it has. And it's not just in quantum mechanics.

Virtual particles are just a mathematical tool to form infinite sums in a consistent way.  No physicist believes they exist.
 
I don't believe that's true. Virtual particles have just as much existence as quarks do, even though nobody has ever seen a free quark and nobody ever will if the standard model of particle physics is correct, and nobody has ever seen the Higgs boson and nobody ever will, because the Higgs boson has a half-life of only 10⁻²² seconds. But particle physicists believe it's real for the same reason they believe that quarks and virtual particles are real: because of the effects it has on things that we can see. In the Higgs case its effect is the decay particles that we can see, because their half-lives are about a trillion times longer than the Higgs'.

>> We don't know if the entire universe is finite or infinite, or if it's open or closed, but either way we do know that the entire universe must be MUCH larger than the observable universe.

>So what? 

If you believe virtual particles do not exist do you also believe that stars that are beyond our observational horizon don't exist and are just a mathematical tool that enables cosmological models that use General Relativity to remain logically consistent?  If you do believe that then you disagree with every cosmologist on the planet.  

>> Schrodinger's equation is deterministic so "the atom just happens to decay" is an insufficient explanation. 

But all MWI does is push the insufficiency off to "you just happen to be in the world where the atom decayed at 3:10pm"

MWI doesn't push off anything! If Schrodinger's equation is correct, and that's all that MWI is saying, then somebody must be in the world where the atom decays at 3:10 PM. By the way, strictly speaking Many Worlds is not an interpretation, it is a theory that can be proven wrong; right now experiments are underway in an attempt to detect the objective collapse of the quantum wave function, and if they find it then the Many Worlds idea is just wrong, it has no wiggle room. 


>> I think at the deepest level every probability is based on ignorance because I think Many Worlds is correct and all that Many Worlds is saying is that Schrodinger's equation means what it says, and Schrodinger's equation is 100% deterministic. If I always knew what world I was in I would know if the cat was alive or dead before I opened the box and I wouldn't need to resort to probability for anything.

I think you think MWI is correct simply because you don't know of any alternatives. 

Guilty as charged. I've said more than once that Many Worlds is the least bad explanation for quantum weirdness that I know of, if somebody comes up with something better I'll drop it like a hot potato.  


A lot of other physicists, like me, think MWI is no better than Copenhagen.  It just pushes the problem off to more obscure questions, like how does the orthogonality of worlds spread?

The worlds are orthogonal because they do not interfere with each other, and that orthogonality spreads because of decoherence. Unless extreme measures are taken, such as cooling them down to close to absolute zero, when two particles become entangled, environmental noise very soon randomizes the phases between the worlds, so that interference effects between them are destroyed.

A “measurement” results in the branching of the universal state into non-interfering “worlds,” each corresponding to a different outcome. Decoherence causes the branches to become independent, which is why observers in each branch see one definite outcome and why they never see a cat that is half dead half alive, some observers see a cat that is 100% dead and other observers see a cat that is 100% alive. 

And why isn't Zeh's Darwinian decoherence enough? 

Quantum Darwinism and Many Worlds are compatible; in fact Quantum Darwinism explains why an observer in one of those worlds experiences classical reality most of the time and only sees quantum weirdness when difficult, sophisticated experiments are performed.

According to Zeh (and later Zurek), when any two objects interact they share a quantum state, so they can be fully described by the same quantum wave function; this is what entanglement is. It's very difficult to keep those two particles isolated, and when they become entangled with the outside environment that is called "decoherence", which is the point where quantum weirdness disappears.

In Everett's idea a measurement causes a branching of the universal wave function into worlds that do not interfere with each other, so each observer in those worlds observes one definite outcome. And no collapse postulate is required.


   John K Clark    See what's on my new list at  Extropolis
ncp
 

Brent Meeker

unread,
Feb 10, 2025, 2:12:37 PMFeb 10
to everyth...@googlegroups.com
There are ways MWI can be saved.  For example, Julian Barbour's idea that a single macroscopic world consists of an enormous number of parallel worlds that are microscopically distinct, and a measurement divides this stream of microscopic worlds into macroscopically distinct worlds.  The division can then instantiate uneven probabilities.  There's a paper by Pearle, which I cited in reply to JC, that puts some mathematics on a similar idea. 

But it is certainly not "just the Schroedinger equation".  It's interesting to think how the Born rule may be realized.  Barandes, Weinberg, and Pearle have ideas worked out to different degrees. Generally they begin by rejecting the Hilbert space picture and adopting the density matrix as fundamental, recognizing that a real state is never completely isolated.

Brent

Brent Meeker

unread,
Feb 10, 2025, 2:17:44 PMFeb 10
to everyth...@googlegroups.com



On 2/10/2025 7:22 AM, John Clark wrote:
On Sun, Feb 9, 2025 at 4:57 PM Brent Meeker <meeke...@gmail.com> wrote:

I think you think MWI is correct simply because you don't know of any alternatives. 

Guilty as charged. I've said more than once that Many Worlds is the least bad explanation for quantum weirdness that I know of, if somebody comes up with something better I'll drop it like a hot potato. 
So did you read any of the papers I cited?

Brent

Russell Standish

unread,
Feb 10, 2025, 4:05:15 PMFeb 10
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 11:12:36AM -0800, Brent Meeker wrote:
>
> There are ways MWI can be saved.  For example Julian Barbour's idea that a
> single macroscopic world consists of an enormous number of parallel worlds that
> are microscopically distinct, and a measurement divides this stream of
> microscopic worlds into macroscopically distinct worlds.  Then the division can
> reflect instantiating uneven probabilities.  There's a paper by Pearle which I
> cited in reply to JC which puts some mathematics on a similar idea. 

I think it _has_ to be something like this. Division into microscopic
worlds can be done arbitrarily, of course, since they're
indistinguishable wrt observers, however to get a meaningful measure
by branch counting, it has to be done in such a way that a symmetry
relation is preserved, and then the indifference principle
applied. Just like a classical 6 sided die is modelled as symmetric
with respect to each face, so we can assign a probability of 1/6 to
each outcome.

My comment to Bruce is that he is inappropriately applying the
indifference principle in his setup. In order to recover the necessary
symmetry, it is necessary to include observers who rotate their
apparatus -θ as well as θ. And once you do that, the probability of 0
being observed by anyone from the union of the two sets of observers
is 1/2. For every observer in the θ set seeing a given sequence x, there is
an observer in the -θ set that sees the complementary sequence.

I don't know whether Kent's setup suffers the same flaw, but I will
look into it when I get a spare moment. (Or a round tuit).

>
> But is certainly not "just the Schroedinger equation".  It's interesting to
> think how the Born rule may be realized.  Barandes, Weinberg, and Pearle have
> ideas worked out in different degrees. Generally they begin by rejecting the
> Hilbert space picture and adopting the density matrix as fundamental,
> recognizing that that a real state is never completely isolated.
>

Actually getting the Born rule from the MWI is obviously non-trivial,
and possibly impossible, given the large number of attempts by bright
people that have failed. However, it does seem that once you assume a
complex measure over the system states, the Born rule arises quite
naturally - it is, after all, the only bilinear relationship between
two states that is normalised to unity.
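
Spelling that last remark out (a sketch, for a normalised state |psi> = sum_i c_i |i> in an orthonormal basis):

```latex
P(i) \;=\; |\langle i|\psi\rangle|^{2}
     \;=\; \langle\psi|i\rangle\langle i|\psi\rangle
     \;=\; c_i^{*}\, c_i ,
\qquad
\sum_i P(i) \;=\; \langle\psi|\psi\rangle \;=\; 1 .
```

It is bilinear in the state and its conjugate, and unitary evolution preserves the normalisation.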

John Clark

unread,
Feb 10, 2025, 4:40:35 PMFeb 10
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 2:17 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> I think you think MWI is correct simply because you don't know of any alternatives. 

>>Guilty as charged. I've said more than once that Many Worlds is the least bad explanation for quantum weirdness that I know of, if somebody comes up with something better I'll drop it like a hot potato. 
So did you read any of the papers I cited?

Yes I did, but I'm still a Many Worlds fan. Two of the papers you cited were written by the same people and were talking about Objective Collapse; as I've said before, if Many Worlds turns out to be untrue I'd switch over to that view.  Another paper was also talking about Objective Collapse but modified things a bit so that the collapse was a continuous evolution rather than a sudden jump; that's nice, but it doesn't change things fundamentally. 

And the Steven Weinberg paper suggests that we could stop using Schrodinger's wave equation entirely and switch over to something that was mathematically equivalent, something that was similar to Werner Heisenberg's matrix approach which strictly insisted that only observable quantities be used. Both methods provided the right answer but Schrodinger's approach proved much more popular because it was easier to use and gave a better (although still not very good) intuitive understanding about what was going on. Can you honestly say that after reading Weinberg's paper thanks to those matrices you are able to form a better mental image about what's happening at the most fundamental level of the very weird quantum world? I sure can't! 

John K Clark    See what's on my new list at  Extropolis
sib




Brent Meeker

unread,
Feb 10, 2025, 5:10:12 PMFeb 10
to everyth...@googlegroups.com
Both Weinberg's and Pearle's papers suggest that the density matrix should be regarded as fundamental instead of a Hilbert space vector.  That's not at all the same as Heisenberg's matrix mechanics, which is strictly equivalent to Schrodinger's equation.  Pearle's paper, which is the more worked out of the two, is not concerned only with observation but also assumes unobserved "collapse" of the wave function. I put collapse in scare quotes because Pearle provides a mathematical mechanism that achieves the collapse probabilistically but smoothly.  I find it easier to visualize than innumerable, expanding, overlapping spheres of space containing UP and DWN results in equal number but carrying along "weights" to give them different probabilities.  Barandes' paper doesn't eliminate multiple worlds but just reduces them to an optional way of looking at the problem, which he compares to a choice of gauge.

Brent







Bruce Kellett

unread,
Feb 10, 2025, 5:21:54 PMFeb 10
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 11:09 PM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

Your argument assumes that all measurement sequences are equally likely, which is false in MWI. The issue is not about which sequences exist (they all do) but about how measure is distributed among them. The Born rule does not emerge from simple branch counting—it emerges from the relative measure assigned to each branch.

You really are obsessed with the idea that I am assuming that all measurement sequences are equally likely. It does not matter how many times I deny this, and point out how my argument does not depend in any way on such an assumption, you keep insisting that that is my error. I think you should pay more attention to what I am saying and not so much to your own prejudices.

Each of the binary sequences that result from N trials of measurements on a 2-component system will exist independently of the original amplitudes. For example, the sequence with r zeros and (N - r) ones will have a coefficient a^r b^(N-r). You are interpreting this as a weight or probability without any evidence for such an interpretation. If you impose the Born rule, it is the Born probability of that sequence. But we have not imposed the Born rule, so as far as I am concerned it is just a number. And this number is the same for that sequence whenever it occurs. The point is that I simply count the zeros (and/or ones) in each sequence. This gives an estimate of the probability of getting a zero (or one) in that sequence. That estimate is p = r/N. Now that probability estimate is the same for every occurrence of that sequence. In particular, the probability estimate is independent of the Born probability from the initial state, which is simply a^2.

The problem here is that we get all possible values of the probability estimate p = r/N from the set of 2^N binary sequences that arise from every set of N trials. This should give rise to concern, because only very few of these probability estimates are going to agree with the Born probability a^2. You cannot, at this stage, use the amplitudes of each sequence to downweight anomalous results, because the Born rule is not available to you from the Schrodinger equation.

The problem is multiplied when you consider that the amplitudes in the original state |psi> = a|0> + b|1> are arbitrary, so the true Born probabilities can take on any value between 0 and 1. This arbitrariness is not reflected in the set of 2^N binary sequences that you obtain in any experiment with N trials, because you get the same set for any value of the original amplitudes.


You claim that in large N trials, most sequences will have an equal number of zeros and ones, implying that the estimated probability will tend toward 0.5. But this ignores that the wavefunction does not generate sequences with uniform measure. The amplitude of each sequence is determined by the product of individual amplitudes along the sequence, and when you apply the Born rule iteratively, high-measure sequences dominate the observer’s experience.

Your mistake is treating measurement as though every sequence has equal likelihood, which contradicts the actual evolution of the wavefunction. Yes, there are 2^N branches, but those branches do not carry equal measure. The vast majority of measure is concentrated in the sequences that match the Born distribution, meaning that nearly all observers find themselves in worlds where outcomes obey the expected frequencies.

This is not speculation; it follows directly from the structure of the wavefunction. The weight of a branch is not just a number—it represents the relative frequency with which observers find themselves in different sequences. The fact that a branch exists does not mean it has equal relevance to an observer's experience.

Your logic would apply if MWI simply stated that all sequences exist and are equally likely. But that is not what MWI says. It says that the measure of a branch determines the number of observer instances that experience that branch. The overwhelming majority of those instances will observe the Born rule, not because of "branch counting," but because high-measure sequences contain exponentially more copies of any given observer.

If your argument were correct, QM would be falsified every time we ran an experiment, because we would never observe Born-rule statistics.

That is the point I am making. MWI is disconfirmed by every experiment. QM remains intact, it is your many worlds interpretation that fails.


Yet every experiment confirms the Born rule, which means your assumption that "all sequences contribute equally" is demonstrably false.

Since I do not make that assumption, your conclusion is wrong.

You are ignoring that measure, not count, determines what observers experience.

When you do an experiment measuring the spin projection of some 2-component state, all that you record is a sequence of zeros and ones, with r zeros and (N - r) ones. You do not ever see the amplitude of that sequence. It has no effect on what you measure, so claiming that it can up- or down-weight your results is absurd.

Bruce

Brent Meeker

unread,
Feb 10, 2025, 5:23:33 PMFeb 10
to everyth...@googlegroups.com



On 2/10/2025 4:09 AM, Quentin Anciaux wrote:
Bruce,

Your argument assumes that all measurement sequences are equally likely, which is false in MWI. The issue is not about which sequences exist (they all do) but about how measure is distributed among them. The Born rule does not emerge from simple branch counting—it emerges from the relative measure assigned to each branch.

You claim that in large N trials, most sequences will have an equal number of zeros and ones, implying that the estimated probability will tend toward 0.5. But this ignores that the wavefunction does not generate sequences with uniform measure. The amplitude of each sequence is determined by the product of individual amplitudes along the sequence, and when you apply the Born rule iteratively, high-measure sequences dominate the observer’s experience.
But there is no amplitude in the result.  The amplitudes are predicted values.  There is no process whereby your measurement is UP and there's a 0.3 probability tag attached to it.  It's just UP, and ex hypothesi there's another branch where it's just DWN. 

Your mistake is treating measurement as though every sequence has equal likelihood, which contradicts the actual evolution of the wavefunction. Yes, there are 2^N branches, but those branches do not carry equal measure.
The point is they have no way to carry any measure at all.  That's something in the prediction which empirically is satisfied by there being unequal numbers of UP and DWN...NOT by some "weight" they carry.  But the UPs and DWNs are necessarily equal in number in MWI because everything possible happens.  The distribution is always binomial with p=0.5.


The vast majority of measure is concentrated in the sequences that match the Born distribution, meaning that nearly all observers find themselves in worlds where outcomes obey the expected frequencies.

This is not speculation; it follows directly from the structure of the wavefunction. The weight of a branch is not just a number—it represents the relative frequency with which observers find themselves in different sequences. The fact that a branch exists does not mean it has equal relevance to an observer's experience.

Your logic would apply if MWI simply stated that all sequences exist and are equally likely. But that is not what MWI says. It says that the measure of a branch determines the number of observer instances that experience that branch. The overwhelming majority of those instances will observe the Born rule, not because of "branch counting," but because high-measure sequences contain exponentially more copies of any given observer.

If your argument were correct, QM would be falsified every time we ran an experiment, because we would never observe Born-rule statistics.
No, it is MWI that is falsified, because if MWI is applied consistently, without sneaking in "weights", the distribution would always be binomial with p=0.5.  If you can't understand that, then you don't understand your own argument.

Brent

Quentin Anciaux

unread,
Feb 10, 2025, 5:32:58 PMFeb 10
to everyth...@googlegroups.com
Your argument is based on treating the measurement process as merely counting sequences of zeros and ones, while dismissing the amplitudes as “just numbers.” But this ignores that the wavefunction governs the evolution of the system, and the amplitudes are not arbitrary labels—they encode the structure of reality. The Schrodinger equation evolves the system deterministically, and when measurement occurs, the measure of each branch determines how many observer instances find themselves in it.

You claim that the amplitude of a sequence does not affect what is measured, yet this is exactly what determines how many observers experience a given sequence. The claim that “you do not ever see the amplitude” misses the point: you do not directly observe measure, but you observe its consequences. The reason we see Born-rule statistics is that the measure dictates the relative number of observers experiencing different sequences.

Quentin 


Quentin Anciaux

unread,
Feb 10, 2025, 5:45:42 PMFeb 10
to everyth...@googlegroups.com
Your argument suggests that since measure is not directly observable, it cannot influence what we experience. But this is incorrect for the same reason that probability distributions matter in classical systems:

You cannot observe probability itself, only its consequences over many trials.

In a biased coin flip (90% heads, 10% tails), every sequence of flips exists in MWI.

But most copies of an observer will find themselves in sequences where heads appear 90% of the time.

The fact that all sequences exist does not mean they contribute equally to an observer’s experience.


You experience the weight of measure indirectly—just like in classical probability, where we experience frequencies that match probabilities over many trials.
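
An exact check of the coin example (a sketch; N = 200 is an illustrative choice):

```python
# Exact binomial sums: the measure on "about 90% heads" histories is
# near 1, while those histories are a vanishing fraction of all 2^N
# equally-counted sequences.
from math import comb

N, p = 200, 0.9
near = [r for r in range(N + 1) if abs(r / N - p) <= 0.05]
measure = sum(comb(N, r) * p**r * (1 - p)**(N - r) for r in near)
count = sum(comb(N, r) for r in near) / 2**N
print(f"measure on ~90%-heads histories: {measure:.3f}")   # close to 1
print(f"fraction of all sequences there: {count:.2e}")     # astronomically small
```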

Bruce Kellett

unread,
Feb 10, 2025, 6:36:07 PMFeb 10
to everyth...@googlegroups.com
On Tue, Feb 11, 2025 at 9:32 AM Quentin Anciaux <allc...@gmail.com> wrote:

Your argument is based on treating the measurement process as merely counting sequences of zeros and ones, while dismissing the amplitudes as “just numbers.” But this ignores that the wavefunction governs the evolution of the system, and the amplitudes are not arbitrary labels—they encode the structure of reality. The Schrodinger equation evolves the system deterministically, and when measurement occurs, the measure of each branch determines how many observer instances find themselves in it.

The Schrodinger equation is completely insensitive to the amplitudes. They are just carried along as inert parameters. It is the interpretation according to the Born rule that makes sense of this structure. But the Born rule, and probability interpretations per se, are not in the Schrodinger equation.

You claim that the amplitude of a sequence does not affect what is measured, yet this is exactly what determines how many observers experience a given sequence.

Where on earth did you get that incredible idea -- that the number of observers depends on the amplitudes?

According to MWI there is a branch for every possible value, and the observer splits along with the branching, so there is an observer on every branch. After N trials of the binary case, there are 2^N branches, with an observer (copy of the original experimenter) on every branch. These all exist equally, so your idea of weighting the branches according to the amplitudes makes no sense: there can be no "degrees of existence". All the observers exist equally, so all are equally entitled to count zeros to get an estimate of the underlying probability.

The claim that “you do not ever see the amplitude” misses the point: you do not directly observe measure, but you observe its consequences. The reason we see Born-rule statistics is that the measure dictates the relative number of observers experiencing different sequences.

That is assuming that there can be "degrees of existence", such that observers on Born-anomalous branches do not exist as strongly as those who see the correct statistics. This is not an idea that is in the Schrodinger equation, it is not in the mathematics, and it is just plain silly. The amplitudes do not give "degrees of existence", nor do they give different relative numbers of observers for each sequence. The mathematics of the Schrodinger equation is clear, and it does not support any such ideas.

Bruce

Brent Meeker

unread,
Feb 10, 2025, 6:52:53 PMFeb 10
to everyth...@googlegroups.com



On 2/10/2025 2:32 PM, Quentin Anciaux wrote:


Le lun. 10 févr. 2025, 23:21, Bruce Kellett <bhkel...@gmail.com> a écrit :
On Mon, Feb 10, 2025 at 11:09 PM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

Your argument assumes that all measurement sequences are equally likely, which is false in MWI. The issue is not about which sequences exist (they all do) but about how measure is distributed among them. The Born rule does not emerge from simple branch counting—it emerges from the relative measure assigned to each branch.

You really are obsessed with the idea that I am assuming that all measurement sequences are equally likely. It does not matter how many times I deny this, and point out how my argument does not depend in any way on such an assumption, you keep insisting that that is my error. I think you should pay more attention to what I am saying and not so much to your own prejudices.

Each of the binary sequences that result from N trials of measurements on a 2-component system will exist independently of the original amplitudes. For example, the sequence with r zeros and (N - r) ones will have a coefficient a^r b^(N-r). You are interpreting this as a weight or probability without any evidence for such an interpretation. If you impose the Born rule, it is the Born probability of that sequence. But we have not imposed the Born rule, so as far as I am concerned it is just a number. And this number is the same for that sequence whenever it occurs. The point is that I simply count the zeros (and/or ones) in each sequence. This gives an estimate of the probability of getting a zero (or one) in that sequence. That estimate is p = r/N. Now that probability estimate is the same for every occurrence of that sequence. In particular, the probability estimate is independent of the Born probability from the initial state, which is simply a^2.

The problem here is that we get all possible values of the probability estimate p = r/N from the set of 2^N binary sequences that arise from every set of N trials. This should give rise to concern, because only very few of these probability estimates are going to agree with the Born probability a^2. You cannot, at this stage, use the amplitudes of each sequence to downweight anomalous results, because the Born rule is not available to you from the Schrodinger equation.

The problem is multiplied when you consider that the amplitudes in the original state |psi> = a|0> + b|1> are arbitrary, so the true Born probabilities can take on any value between 0 and 1. This arbitrariness is not reflected in the set of 2^N binary sequences that you obtain in any experiment with N trials, because you get the same set for any value of the original amplitudes.


You claim that in large N trials, most sequences will have an equal number of zeros and ones, implying that the estimated probability will tend toward 0.5. But this ignores that the wavefunction does not generate sequences with uniform measure. The amplitude of each sequence is determined by the product of individual amplitudes along the sequence, and when you apply the Born rule iteratively, high-measure sequences dominate the observer’s experience.

Your mistake is treating measurement as though every sequence has equal likelihood, which contradicts the actual evolution of the wavefunction. Yes, there are 2^N branches, but those branches do not carry equal measure. The vast majority of measure is concentrated in the sequences that match the Born distribution, meaning that nearly all observers find themselves in worlds where outcomes obey the expected frequencies.

This is not speculation; it follows directly from the structure of the wavefunction. The weight of a branch is not just a number—it represents the relative frequency with which observers find themselves in different sequences. The fact that a branch exists does not mean it has equal relevance to an observer's experience.

Your logic would apply if MWI simply stated that all sequences exist and are equally likely. But that is not what MWI says. It says that the measure of a branch determines the number of observer instances that experience that branch. The overwhelming majority of those instances will observe the Born rule, not because of "branch counting," but because high-measure sequences contain exponentially more copies of any given observer.

If your argument were correct, QM would be falsified every time we ran an experiment, because we would never observe Born-rule statistics.

That is the point I am making. MWI is disconfirmed by every experiment. QM remains intact, it is your many worlds interpretation that fails.


Yet every experiment confirms the Born rule, which means your assumption that "all sequences contribute equally" is demonstrably false.

Since I do not make that assumption, your conclusion is wrong.

You are ignoring that measure, not count, determines what observers experience.

When you do an experiment measuring the spin projection of some 2-component state, all that you record is a sequence of zeros and ones, with r zeros and (N - r) ones. You do not ever see the amplitude of that sequence. It has no effect on what you measure, so claiming that it can up- or down-weight your results is absurd.

Bruce

Your argument is based on treating the measurement process as merely counting sequences of zeros and ones, while dismissing the amplitudes as “just numbers.”
No amplitudes show up in the sequence of zeros and ones.  You are implicitly assuming the Born rule attaches to those sequences of 0 and 1, but it doesn't without a separate axiom saying so.


But this ignores that the wavefunction governs the evolution of the system, and the amplitudes are not arbitrary labels—they encode the structure of reality.
But in MWI  at every repetition of the experiment all the possible results occur.  And they don't have any weights attached to them.


The Schrodinger equation evolves the system deterministically, and when measurement occurs, the measure of each branch determines how many observer instances find themselves in it.
Now you're assuming branch counting instead of weights.  But the same objection applies.  The Schroedinger equation does not create different numbers of branches.  MWI assumes every possible outcome occurs once per experiment.  To get some different branching structure you need the Born rule or some equivalent assumption (like Barbour's).



You claim that the amplitude of a sequence does not affect what is measured, yet this is exactly what determines how many observers experience a given sequence. The claim that “you do not ever see the amplitude” misses the point: you do not directly observe measure, but you observe its consequences. The reason we see Born-rule statistics is that the measure dictates the relative number of observers experiencing different sequences.
How does the measure appear in the multiple worlds?  If you're going to have every possibility occur, how in each world, where the only observation is 1 or 0, does the measure occur?

Brent

Brent Meeker

unread,
Feb 10, 2025, 6:59:11 PMFeb 10
to everyth...@googlegroups.com



On 2/10/2025 2:45 PM, Quentin Anciaux wrote:
Your argument suggests that since measure is not directly observable, it cannot influence what we experience. But this is incorrect for the same reason that probability distributions matter in classical systems:

You cannot observe probability itself, only its consequences over many trials.
Which is why in classical probability it is important that things happen or don't happen.  If you assumed everything happened then you would have the same problem MWI has, and you would have to adopt something like the Born rule to explain what "probability" means.




In a biased coin flip (90% heads, 10% tails), every sequence of flips exists in MWI.

But most copies of an observer will find themselves in sequences where heads appear 90% of the time.

The fact that all sequences exist does not mean they contribute equally to an observer’s experience.
But what does explain their contributions?...the Born rule, which does not comport with "all possibilities occur".

Brent

Quentin Anciaux

unread,
Feb 11, 2025, 2:29:49 AMFeb 11
to everyth...@googlegroups.com
Bruce,

Your argument assumes that all branches are equally weighted in terms of observer experience, which contradicts what we actually see in quantum experiments. The claim that the Schrödinger equation is "insensitive" to amplitudes is incorrect. The amplitudes evolve deterministically under the Schrödinger equation and define the measure associated with each outcome. The Born rule does not need to be "inserted" into MWI—it emerges naturally if one considers measure as determining how many observer instances experience each outcome.

Your claim that there are exactly 2^N observers after N trials and that each one "counts equally" ignores what measure represents. The fundamental point is that not all branches contribute equally to what an observer experiences. Yes, an observer exists on every branch, but that does not mean they exist in equal numbers.

In standard probability theory, an event occurring in more instances is simply more likely to be observed. Similarly, in MWI:

A branch with a higher amplitude means there are exponentially more copies of the observer experiencing that outcome.

This is not "assigning degrees of existence"—it is stating that measure determines how many versions of an observer find themselves in a given sequence.

You can call this "silly," but it's the only way MWI remains consistent with experiments. If each observer counted equally across all branches, we would expect uniform probabilities, contradicting the Born rule.

The Schrödinger equation is not "insensitive" to amplitudes; it governs their evolution. The amplitudes define how much of the total wavefunction exists in each outcome. Saying that amplitudes are "inert" is like saying that in classical probability, event frequencies are "inert" because the probability distribution does not dynamically change per trial.

The fact that amplitudes don’t directly affect local observations does not mean they are irrelevant. You do not need to "see" probability distributions to experience their effects. In classical cases, you observe probabilities through frequency distributions—not because you see an abstract probability function floating in space.

Similarly, in MWI, you experience the effects of amplitude-based measure because the majority of your copies exist in branches that follow the Born rule.

Your argument frames measure as a metaphysical claim about "degrees of existence," but that’s a strawman. Measure is not about some observers being "more real" than others—it’s about how many instances of a given observer exist in different branches.

Imagine a lottery where some numbers are printed millions of times and others are printed once. Saying "each ticket exists, so all are equal" ignores the fact that you are overwhelmingly more likely to pick a ticket that was printed millions of times.

This is exactly what happens in MWI:

Yes, every sequence of outcomes exists.

But observers overwhelmingly find themselves in high-measure sequences because there are simply more instances of them there.

If your claim were correct, quantum mechanics would fail to match experiment, because the observed frequencies would not match the Born rule. Since that never happens, the conclusion is clear: measure, not naive branch counting, determines what observers experience.

Yes, there are currently no clear-cut theories that recover the Born rule from the Schrödinger equation alone, but that doesn't mean there can't be. 

Also, I'm not an advocate of MWI per se; I prefer an information-theoretic approach from which we should be able to recover MW-like theories and a measure (maybe mixing some UD and a speed prior).

Quentin


Bruce Kellett

unread,
Feb 11, 2025, 5:47:47 AMFeb 11
to everyth...@googlegroups.com
On Tue, Feb 11, 2025 at 6:29 PM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

Your argument assumes that all branches are equally weighted in terms of observer experience, which contradicts what we actually see in quantum experiments. The claim that the Schrödinger equation is "insensitive" to amplitudes is incorrect. The amplitudes evolve deterministically under the Schrödinger equation and define the measure associated with each outcome. The Born rule does not need to be "inserted" into MWI—it emerges naturally if one considers measure as determining how many observer instances experience each outcome.

Your claim that there are exactly 2^N observers after N trials and that each one "counts equally" ignores what measure represents. The fundamental point is that not all branches contribute equally to what an observer experiences. Yes, an observer exists on every branch, but that does not mean they exist in equal numbers.

In standard probability theory, an event occurring in more instances is simply more likely to be observed. Similarly, in MWI:

A branch with a higher amplitude means there are exponentially more copies of the observer experiencing that outcome.

Your argument might be more convincing if any of this actually followed from the Schrodinger equation.


This is not "assigning degrees of existence"—it is stating that measure determines how many versions of an observer find themselves in a given sequence.

You can call this "silly," but it's the only way MWI remains consistent with experiments. If each observer counted equally across all branches, we would expect uniform probabilities, contradicting the Born rule.

The fact is that MWI is not consistent with experiment since the different sequences in repeated trials on similarly prepared systems fail to give frequencies in accordance with the Born rule.



 
The Schrödinger equation is not "insensitive" to amplitudes; it governs their evolution. The amplitudes define how much of the total wavefunction exists in each outcome. Saying that amplitudes are "inert" is like saying that in classical probability, event frequencies are "inert" because the probability distribution does not dynamically change per trial.

Nonsense.

The fact that amplitudes don’t directly affect local observations does not mean they are irrelevant. You do not need to "see" probability distributions to experience their effects. In classical cases, you observe probabilities through frequency distributions—not because you see an abstract probability function floating in space.

No, you observe probabilities because some things happen while others don't.


Similarly, in MWI, you experience the effects of amplitude-based measure because the majority of your copies exist in branches that follow the Born rule.

There is no mathematical justification for such a proposition.


Your argument frames measure as a metaphysical claim about "degrees of existence," but that’s a strawman. Measure is not about some observers being "more real" than others—it’s about how many instances of a given observer exist in different branches.

That is how you have used the concept, since there is only one copy of you as the observer on each branch -- that is what the Schrodinger equation says. All else is fantasy.


Imagine a lottery where some numbers are printed millions of times and others are printed once. Saying "each ticket exists, so all are equal" ignores the fact that you are overwhelmingly more likely to pick a ticket that was printed millions of times.

This is exactly what happens in MWI:

Yes, every sequence of outcomes exists.

But observers overwhelmingly find themselves in high-measure sequences because there are simply more instances of them there.

That is not the case. If it were, then you could get that result from the Schrodinger equation. But you can't.


If your claim were correct, quantum mechanics would fail to match experiment, because the observed frequencies would not match the Born rule. Since that never happens, the conclusion is clear: measure, not naive branch counting, determines what observers experience.

MWI does not match experiments, because it cannot get the Born rule. You cannot even consistently impose the Born rule on Everettian theory.

Yes, there is currently no clear cut theories to recover the born rule from Schrödinger equation alone, doesn’t mean there aren't. 

I see, you are just not clever enough to see how it all works!


Also I'm not an advocate of MWI per se, I prefer information theory approach from which we should be able to recover MW like theories and a measure (maybe mixing some UD and speed prior)

Since you are not clever enough to see how it all works, it might be better if you stopped laying down the law to those who can see more clearly.

Bruce

Quentin Anciaux

unread,
Feb 11, 2025, 6:04:55 AMFeb 11
to everyth...@googlegroups.com
Well, as I see it, we can't have a discussion on this list without violence, since you seem to see yourself as a high-end, hyper-intelligent human being. I'll stop right here. Stay with your kind. Bye


John Clark

unread,
Feb 11, 2025, 7:13:47 AMFeb 11
to everyth...@googlegroups.com
On Mon, Feb 10, 2025 at 6:36 PM Bruce Kellett <bhkel...@gmail.com> wrote:

The Schrodinger equation is completely insensitive to the amplitudes. They are just carried along as inert parameters.

An equation is a mathematical entity, so as far as physics is concerned ALL the parameters in one are inert until a physicist assigns a physical meaning to one; this is true even for an equation as simple as F=ma. Mathematics in general and equations in particular are useful because they increase our ability to think logically. 

the Born rule, and probability interpretations per se, are not in the Schrodinger equation.

A mathematical equation by itself cannot confer a physical meaning or interpretation on anything, not even Schrodinger's equation; it's just a bunch of symbols that can be transformed into other symbols by following certain strictly defined rules that can all be derived from a few simple, self-evident logical axioms. An intelligent being is in the meaning-conferring business, not the symbols in an equation. 

According to MWI there is a branch for every possible value, and the observer splits along with the branching, so there is an observer on every branch. After N trials of the binary case, there are 2^N branches, with an observer (copy of the original experimenter) on every branch. These all exist equally, so your idea of weighting the branches according to the amplitudes makes no sense: there can be no "degrees of existence". All the observers exist equally, so all are equally entitled to count zeros to get an estimate of the underlying probability.

You're fighting a strawman; even back in 1957 Hugh Everett knew that you can't use branch counting to assign probability. I explained why in a post I sent to the list on November 5, and I will repeat it now: 

Branch counting won't work: I measure the spin of an electron in the vertical direction and both the electron and I split into two, and there's a 50% chance "I" will see spin up and a 50% chance "I" will see spin down. So far branch counting seems to work. But before I started I made up my mind that if I see spin up I will do nothing, but if I see spin down then I will wait for 10 minutes and then measure the electron spin a second time, but this time along the horizontal axis. And so the spin down world splits again into a spin right world and a spin left world. So now there's only one branch in the spin up line BUT two branches in the spin down line. If you use branch counting you'd have to say that in the first measurement the probability was not 50-50 as you originally thought, instead there was a 1/3 chance I would see spin up and a 2/3 chance I would see spin down. But something I do now can't affect the probability of an experiment I performed 10 minutes ago!
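
A toy enumeration of that setup makes the failure explicit (a sketch; the labels are illustrative):

```python
# Measure spin-z; only in the "down" world do we also measure spin-x.
# Naive leaf counting then retroactively turns the 50/50 first
# measurement into 1/3 vs 2/3.
leaves = [("up",), ("down", "left"), ("down", "right")]
up_share = sum(leaf[0] == "up" for leaf in leaves) / len(leaves)
print(up_share)   # 0.333..., not the 0.5 the first split demands
```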

That's why when I draw a diagram of the worlds splitting on a piece of paper or a blackboard, even though the lines I draw are two dimensional, I like to think of those lines as having a little 3D thickness. The total sum of the thickness of all the branches in the multiverse remains constant, but each time a world splits the resulting worlds become more numerous but thinner; although it always remains true that if you're betting on which universe you are likely to be in, you should always place your money on being in the thicker one. 

I want to emphasize that this thickness business is NOT to be taken literally; it's just something that I happen to like because it's a visual analogy of the fact that the sum total of all probabilities always remains exactly the same, and that is 1. You may not like my analogy and that's OK, because there's no disputing matters of taste. But disliking branch counting is not a matter of taste, because such a dislike is not subjective: branch counting objectively doesn't work.

John K Clark    See what's on my new list at  Extropolis
6sq


Quentin Anciaux

unread,
Feb 11, 2025, 7:27:28 AMFeb 11
to everyth...@googlegroups.com
Bruce,

I'll still give it a try to get a discussion going (dumb me).

If your response boils down to "this is nonsense" and "you’re not clever enough," then you’re not engaging with the actual argument. The question is not whether the Schrödinger equation explicitly encodes the Born rule—it does not, just as it does not encode classical probability either. The question is whether MWI can recover the Born rule without adding collapse, and there are multiple serious approaches to doing so.

Your claim that "MWI does not match experiments because it cannot get the Born rule" is just an assertion. The Schrödinger equation does evolve amplitudes, and those amplitudes do determine the structure of the wavefunction. You dismiss measure as meaningless, yet every quantum experiment confirms that the statistics follow the Born rule. If naive branch counting were correct, experiments would contradict the Born rule—but they do not. That means something in MWI must account for it.

Saying "all branches exist equally" ignores what "equally" even means in a probabilistic context. Probability is not about "some things happen while others don’t"—that’s a description, not an explanation. Classical probability arises because there are more ways for some outcomes to occur than others. In MWI, the weight of a branch is not a degree of existence—it’s a statement about how many copies of an observer find themselves in that outcome.

If you have a counterargument, provide one—just dismissing the approach as "fantasy" without addressing the core point doesn’t make your position stronger. If you want to argue that MWI cannot recover the Born rule, then you need to explain why all proposed derivations (Deutsch-Wallace, Zurek’s envariance, self-locating uncertainty, etc.) are fundamentally flawed, not just assert that they don’t count.

Quentin 

Bruce Kellett

unread,
Feb 11, 2025, 5:29:03 PMFeb 11
to everyth...@googlegroups.com
On Tue, Feb 11, 2025 at 11:27 PM Quentin Anciaux <allc...@gmail.com> wrote:
Bruce,

I'll still give it a try to get a discussion (dumb me).

If your response boils down to "this is nonsense" and "you’re not clever enough," then you’re not engaging with the actual argument. The question is not whether the Schrödinger equation explicitly encodes the Born rule—it does not, just as it does not encode classical probability either. The question is whether MWI can recover the Born rule without adding collapse, and there are multiple serious approaches to doing so.

Your claim that "MWI does not match experiments because it cannot get the Born rule" is just an assertion. The Schrödinger equation does evolve amplitudes, and those amplitudes do determine the structure of the wavefunction. You dismiss measure as meaningless, yet every quantum experiment confirms that the statistics follow the Born rule. If naive branch counting were correct, experiments would contradict the Born rule—but they do not. That means something in MWI must account for it.

Saying "all branches exist equally" ignores what "equally" even means in a probabilistic context. Probability is not about "some things happen while others don’t"—that’s a description, not an explanation. Classical probability arises because there are more ways for some outcomes to occur than others. In MWI, the weight of a branch is not a degree of existence—it’s a statement about how many copies of an observer find themselves in that outcome.

If you have a counterargument, provide one—just dismissing the approach as "fantasy" without addressing the core point doesn’t make your position stronger. If you want to argue that MWI cannot recover the Born rule, then you need to explain why all proposed derivations (Deutsch-Wallace, Zurek’s envariance, self-locating uncertainty, etc.) are fundamentally flawed, not just assert that they don’t count.

Many authors have pointed out the deficiencies of the arguments by Deutsch-Wallace, Zurek, and others. The problems usually boil down to the fact that these attempts implicitly assume the Born rule from the outset. For example, as soon as you invoke separate non-interacting worlds, and rely on decoherence to give (approximate) orthogonality, then you have assumed that small amplitudes correspond to low probability -- which is just the Born rule. Similar considerations apply to the other arguments. The paper by Kent that I referenced earlier examines many of these derivations and points out their problems.

As far as your basic argument goes, there is no evidence that the Schrödinger equation itself "evolves the amplitude", or that it gives different numbers of observers on branches according to the amplitudes. The idea of "branch weight" is just a made-up surrogate for assuming a probabilistic interpretation; namely, the Born rule.

The position I am taking tries to avoid all these spurious additional assumptions/interpretations. We take the Schrödinger equation with the Everettian proposal that all outcomes occur on every trial, and see where that takes us. In the binary case, with repeated trials on similarly prepared systems, we get the 2^N binary strings. We get the same 2^N strings whatever amplitudes the initial wave function started with. There is only one copy of the initial observer on each such binary sequence. That observer can count the number of zeros in his/her string to estimate the probability of that outcome. Since the strings themselves are independent of the amplitudes, the proportion of zeros in any given string is the same whatever the amplitudes. Since the Born probability varies according to the original amplitude, we find that this simplest version of many worlds is in conflict with the Born rule. Other conflicts with the Born rule are evident in other ways -- I have mentioned some of them previously. To go beyond this you have to introduce complications that are not inherent in the original Schrödinger equation and are largely incompatible with simple unitary state evolution.
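
A small Python enumeration (my sketch of the counting argument; N = 10 is arbitrary) makes the amplitude-independence explicit. The distribution of observers over observed frequencies is fixed by the counting alone, and it peaks at 1/2 whatever the amplitudes were:

    from collections import Counter
    from itertools import product

    # Enumerate all 2**N branches after N binary trials; each branch
    # contains exactly one copy of the observer, whatever the amplitudes.
    N = 10
    freqs = Counter(seq.count(0) / N for seq in product((0, 1), repeat=N))

    for f in sorted(freqs):
        print(f"fraction of zeros {f:.1f}: {freqs[f]} of {2**N} observers")
    # The peak is at 0.5 regardless of the amplitudes, so counting observers
    # cannot reproduce a Born probability that differs from 1/2.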

Bruce

Quentin Anciaux

unread,
Feb 11, 2025, 5:51:17 PMFeb 11
to everyth...@googlegroups.com
Bruce,

You argue that quantum mechanics follows the Born rule, but MWI does not. However, this assumes that MWI should reproduce the Born rule directly from the Schrödinger equation without additional structure. The issue is not whether the Born rule holds in quantum mechanics—it clearly does—but whether MWI can account for it without collapse.

You say that deriving the Born rule in MWI requires additional assumptions, but that's not a valid objection—it's an open question that multiple approaches are trying to address. Decision theory, envariance, and self-locating uncertainty all attempt to show why observers should expect probabilities to follow |ψ|². Dismissing them outright ignores that they provide serious motivation for why the Born rule emerges from unitary evolution.

Your argument rests on the claim that all sequences exist independently of their amplitudes, meaning that counting sequences alone should determine probabilities. But this contradicts experimental results. If naive sequence counting were correct, we would observe a uniform distribution of outcomes across experiments, which we do not. The fact that quantum mechanics consistently follows |ψ|² suggests that something in the structure of MWI must explain why high-measure branches dominate experience.

You dismiss measure as a "made-up surrogate" for probability, but this ignores that measure is a mathematical property of the wavefunction, not an arbitrary postulate. Amplitudes determine the structure of the quantum state, and decoherence ensures that branches remain effectively independent. The question is whether measure also determines the relative frequency with which observers find themselves in different branches. If it did not, we would expect deviations from the Born rule, yet we see none.
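
To sketch what measure buys here (my illustration; p = 0.36 is an arbitrary Born weight), weight each branch by the product of its |amplitude|² factors instead of counting it once; the expected observed frequency then sits at p rather than 1/2:

    from itertools import product

    # Weight each of the 2**N branches by the product of the Born factors
    # of its outcomes, then average the observed fraction of zeros.
    N, p = 10, 0.36            # p = |amplitude of outcome 0|**2 (arbitrary)
    expected = 0.0
    for seq in product((0, 1), repeat=N):
        weight = 1.0
        for bit in seq:
            weight *= p if bit == 0 else (1 - p)
        expected += weight * seq.count(0) / N

    print(expected)            # ~0.36: the measure-weighted mean equals p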

The fact that multiple approaches attempt to derive the Born rule within MWI—decision theory, envariance, self-locating uncertainty—shows that this is an open question, not a settled failure. Simply asserting that MWI "does not follow the Born rule" ignores the very problem that these derivations attempt to solve. The Born rule is an observed fact, and MWI needs to explain it—but dismissing all attempts to do so does not make the problem go away.

You frame your argument as avoiding "spurious additional assumptions." But you are making an assumption yourself: that all branches contribute equally to experience. This is a choice, not a consequence of unitary evolution.

Quentin 
