But then one is defining "this universe" post hoc, after the photon lands on some point on the screen. One can do that, but it means that two possibilities really exist. Whether that counts as a single universe or as two universes is just semantics. There are two classes of paths for the photon: one class is the set of paths through one slit, the other is the set of paths through the other slit.
Given that, we can do another measurement in which we simply measure which slit the photon goes through. We no longer let the photon travel on toward the screen; instead we detect whether the photon, after passing the slits, is registered immediately behind the left slit or the right slit.
If I perform one such measurement and decide to go to vacation destination X if the photon is detected behind the left slit, and to destination Y if it is detected behind the right slit, and I end up going to X, the question is whether there exists a parallel world in which I go to Y.
The question for those who say that only one world exists, the one in which I go to X, is to explain why both possibilities for the photon (left slit or right slit) objectively exist when we detect the photon only at the screen, yet only one possibility exists when we detect the photon directly after it passes the slits.
To view this discussion visit https://groups.google.com/d/msgid/everything-list/8ad48823-1135-483b-90fe-b9249c3257c1%40gmail.com.
Not all possibilities are realized, but those with non-zero amplitude are realized somewhere in the superposition. In MWI, "possibility" refers to branches with different measures, not to mere logical abstractions. A "possible" event with zero measure is equivalent to non-existence.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/everything-list/CAFxXSLTSp0S5WjE4raWrURhm67a9Ost%3DuR_OKWh_E7WjkB8MYw%40mail.gmail.com.
You’re conflating mathematical possibilities with experiential frequencies. Yes, all 2^N sequences exist in the wavefunction, but their measures are not equal. The Born rule emerges because observers are not uniformly distributed across these sequences: almost all self-locating observers end up in high-measure branches. If you assume equal weighting, you’re rejecting the very structure of the wavefunction and replacing quantum mechanics with branch counting.
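For what it's worth, the gap between branch counting and the amplitude measure is easy to exhibit numerically. A minimal sketch (my own illustration, assuming |a|^2 = 0.9 purely for the example): count the fraction of the 2^N sequences whose frequency of 1s is near 0.9, and compare it with the total squared-amplitude measure those same sequences carry.

```python
# Sketch: flat branch counting vs. the Born measure for N binary trials
# with unequal amplitudes. |a|^2 = 0.9 is an assumed, illustrative value.
from math import comb

N = 100
p = 0.9  # |a|^2, the Born weight of outcome 1 on each trial

# Fraction of the 2^N sequences whose frequency of 1s lies in [0.85, 0.95]
count_frac = sum(comb(N, k) for k in range(85, 96)) / 2**N

# Total Born measure (squared amplitude) carried by those same sequences
measure_frac = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(85, 96))

print(f"fraction of branches with ~90% ones: {count_frac:.3e}")
print(f"Born measure on those branches:      {measure_frac:.3f}")
```

Under flat counting those sequences are a vanishing minority; under the Born measure they carry almost all the weight. That is the sense in which "almost all self-locating observers end up in high-measure branches".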
Bruce,

The measure isn't something I'm inventing; it's implicit in the squared amplitudes of the wavefunction. The Schrödinger equation preserves the L² norm, and decoherence ensures that branches with extremely low amplitudes contribute negligibly to observer statistics. Ignoring that structure and treating all branches as equally weighted is not quantum mechanics, it's just branch counting under a flat prior.
Bruce,
You’re conflating two different levels: the combinatorial existence of 2^N sequences and their statistical weight. Yes, the Schrödinger equation produces all 2^N sequences regardless of amplitudes, but the amplitudes control the measure over those sequences. Changing |a|² and |b|² doesn’t alter which sequences exist; it alters how observers are distributed among them. That’s exactly what the Born rule captures: almost all self-locating observers end up in high-measure branches. Treating all sequences as equally weighted is equivalent to replacing quantum mechanics with flat branch counting, which is not what the formalism prescribes.
Bruce,
You’re assuming that each sequence has equal weight by construction, but that’s exactly what quantum mechanics denies. The Schrödinger equation doesn’t just produce sequences, it produces them with amplitudes, and those amplitudes determine the statistical weight via |a|² and |b|². The fact that sequences exist mathematically doesn’t mean they are sampled uniformly by observers. Without introducing measure, your argument effectively replaces quantum mechanics with flat branch counting, which is inconsistent with the formalism itself.
Bruce,
You keep assuming one observer per sequence and uniform sampling, but that assumption is yours, not Everett’s. In Everett’s framework, the relative weights aren’t arbitrary — they’re determined by the amplitudes in the wavefunction. By rejecting that, you’re refuting a simplified model of your own making, not MWI itself. If your argument truly applied to Everett’s theory, you should be able to show how it addresses the role of measure instead of ignoring it.
Bruce,
You keep repeating the same circular reasoning: you assume one observer per branch, then conclude uniform sampling by observers, then use that to claim equal measure. But Everett’s relative-state formulation does not require discrete worlds or a uniform observer distribution — that is your interpretation, not a derivation.
Carrying amplitudes through unitarity without giving them any role is precisely the point of contention. Ignoring them does not make their influence vanish; it only means you are not engaging with the core question. If you assert that your construction proves all branches have equal measure, then you are assuming the conclusion you're trying to establish.
If you are certain this invalidates any amplitude-based measure and thus the Born rule, the proper way forward is still the same: publish the derivation and let it stand under peer review. Repeating it here without addressing counterarguments doesn't make it more correct, just more dogmatic.
Bruce,
Your reasoning is flawed because you conflate the existence of sequences with their measure. Yes, the set of 2^N sequences is the same regardless of a and b, but the amplitudes attached to them are not irrelevant bookkeeping. In Everett's framework, the squared amplitudes define the structure of the wavefunction and thus the density of observer-instances across sequences.
Setting a = b to argue that all sequences have equal weight and then generalizing that conclusion to arbitrary a and b is invalid.
It ignores the very feature that makes different amplitudes significant: they change the relative contribution of each branch to future correlations and observed frequencies. You cannot dismiss this and still claim to be deriving anything about measure.
If you want to argue that amplitudes have no bearing on measure, you need an independent justification for why quantum mechanics' core mathematical structure and its experimental validation via the Born rule should be discarded. Otherwise, you are assuming your conclusion.
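To make the a = b objection concrete, here is a small sketch (mine, with N = 10 and illustrative values of |a|^2): the set of 2^N sequences never changes, but the measure-weighted mean frequency of 1s tracks |a|^2 exactly, so conclusions drawn at a = b do not generalize.

```python
# Sketch: the same 2^N sequences exist for any amplitudes, but the Born
# measure over them shifts with |a|^2. Values of p here are illustrative.
from math import comb

N = 10
for p in (0.5, 0.8):  # |a|^2
    # Born measure carried by each coarse class "k ones out of N"
    measure = [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]
    mean_freq = sum(k * m for k, m in enumerate(measure)) / N
    print(f"|a|^2={p}: {2**N} sequences, total measure={sum(measure):.3f}, "
          f"measure-weighted mean frequency of 1s={mean_freq:.2f}")
```

The sequence count (1024) and total measure (1.0) are identical in both cases; only the distribution of measure over sequences, and hence the typical observed frequency, changes.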
Bruce,
If your derivation is as solid as you claim, then a skeptical referee is exactly who you should want to convince. Repeating the same argument here without engaging with the role of amplitudes will not make it any stronger. You cannot dismiss amplitudes entirely and then claim to have explained why the measure must be uniform; that is circular.

If you truly believe your reasoning refutes the Born rule within Everett’s framework, then publishing it is the only way to settle the matter. Otherwise, endlessly asserting it here looks less like confidence and more like avoidance.

Your entire argument hinges on assuming uniform observer sampling by postulating one observer per branch. But that is precisely the point under debate, not a derived result. If you ignore the role of amplitudes in defining the structure of the wavefunction, you are not engaging with Everett's formulation, only with your own simplified model. Until you demonstrate why amplitudes should be irrelevant within unitary evolution, claiming equal weights is just assuming your conclusion.
Brent,
In Everettian QM, the Born rule applies to coarse-grained outcomes, not to individual fine-grained sequences treated as equiprobable.
The amplitudes are not just bookkeeping: their squared norm defines the measure, which determines how observer-instances are distributed.
Think of a lottery with one million tickets, but where 400,000 of them are identical copies of the same number. All tickets "exist," but they are not equally weighted when predicting what a typical observer will see.
Similarly, in your N=6 example, 011000 and 001010 belong to the same coarse-grained class of "2 successes out of 6," and the combined measure of all such sequences follows the Born rule.
Assuming each branch has equal weight and uniform observer sampling is what creates the contradiction; Everett’s formulation does not require that assumption.
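A quick sketch of the coarse-graining claim (my own illustration; |a|^2 = 0.7 is an assumed value, not one from the discussion): summing the fine-grained Born measures of every sequence in the "2 of 6" class reproduces the binomial Born-rule weight, whatever the amplitudes.

```python
# Sketch: the combined Born measure of the coarse class "k ones out of N"
# equals the binomial form in |a|^2. Here |a|^2 = 0.7 is illustrative.
from itertools import product
from math import comb, isclose

N, k, p = 6, 2, 0.7
# Sum the fine-grained Born measures of every sequence in the class...
direct = sum(
    p**sum(s) * (1 - p)**(N - sum(s))
    for s in product([0, 1], repeat=N)
    if sum(s) == k
)
# ...and compare with the closed binomial form of the Born rule.
assert isclose(direct, comb(N, k) * p**k * (1 - p)**(N - k))
print(f"combined measure of the '2 of 6' class at |a|^2=0.7: {direct:.6f}")
```

Sequences such as 011000 and 001010 each contribute the same fine-grained weight p²(1-p)⁴, and the class collects C(6,2) = 15 of them.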
I think some specificity would help this debate. Suppose N=6, so there are 64 different sequences in 64 different worlds. The number of observers is irrelevant; we can suppose the results are recorded mechanically in each world. Further suppose that a=b, so there is no question of whether amplitudes are being respected. Then in one of the worlds we have 011000. Per the Born rule its probability is 0.2344. In MWI it is 1/64=0.0156. The difference arises because an observer applying the Born rule looks at it as an instance of 2 out of 6 successes.
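The two numbers above are easy to check (a sketch, just reproducing the arithmetic in the example):

```python
# Check of the N=6 example: the Born-rule probability of "2 successes
# out of 6" versus the flat one-world-in-64 weight.
from math import comb

N = 6
born = comb(N, 2) * 0.5**N   # 011000 viewed as "2 successes out of 6"
flat = 1 / 2**N              # one specific sequence among 64 equal worlds

print(f"Born rule, 2 of 6:  {born:.4f}")   # 0.2344
print(f"flat branch weight: {flat:.4f}")   # 0.0156
```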
On Thu, Aug 28, 2025 at 5:16 AM Brent Meeker <meeke...@gmail.com> wrote:
The trouble with this is that you are treating it as an instance of Bernoulli trials with probability p = 0.5. When every outcome occurs on every trial, we no longer have a binomial distribution. The binomial distribution assumes that you have x successes out of N trials; in the Everettian case you have one success on every trial.
So your probability above for 2 successes applies to Bernoulli trials with 'one' as the success. The thing is that the probability of getting a zero is also 1/2, so we also have four successes out of six trials in your example, and the binomial probability for that result is also 0.2344. Moreover, if we regard this experiment as a test of the Born rule, we have four zeros in 6 trials, which gives an estimate of the probability of 4/6 = 0.667, or two ones in 6 trials, which gives an estimate of 2/6 = 0.333; neither estimate equals |a|^2 = 0.5. The difference becomes more pronounced as N increases. The problem with your analysis is that you are assuming a binomial distribution, and we do not have any such distribution.
Bruce
So why can't the MWI observer do the same calculation? He certainly can; he can apply the Born rule. But when he does so, the result can't be interpreted as a probability of his branch, since such probabilities would add up to much more than 1.0 when summed over the 64 different worlds. From the standpoint of statistics, 011000 is the same as 001010 and their probabilities sum. Their difference is just incidental; but they are different worlds in MWI, and summing them makes no sense.
Brent
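The "adds up to much more than 1.0" claim can be checked directly (a sketch of the arithmetic, assuming each world assigns the Born weight C(6,k)/64 to its own coarse class):

```python
# Check: if each of the 64 worlds assigns its own coarse class
# "k ones out of 6" the Born probability C(6,k)/64, the sum of those
# per-world probabilities over all 64 worlds greatly exceeds 1.
from math import comb

N = 6
total = sum(comb(N, k) * (comb(N, k) / 2**N) for k in range(N + 1))
print(f"sum over all 64 worlds: {total:.4f}")  # 14.4375, well above 1
```

The closed form is C(12,6)/64 = 924/64, since the sum of C(6,k)² over k is C(12,6).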
On 8/27/2025 5:07 PM, Bruce Kellett wrote:
That's how an experimenter will compute the probability of 2 successes in 6 trials given p = 0.5. Yes, the probability estimate is 0.33 given 2 in 6. But the probability of the result given p is not generally the same as the estimate of p given the result: the former is binomially distributed, while the estimates of p aren't part of a distribution, since they don't add up to 1.
The normal approximation for large N, as Jesse seems to assume, simply does not hold, since the distribution is not binomial.
Bruce
You were discussing a case of this form: "This is easily seen if one considers a wave function with a binary outcome, |0> and |1> for example. After N repeated trials, one has 2^N strings of possible outcome sequences. One can count the number of, say, ones in each possible outcome sequence."

If we are interested in statistics for N trials, let's define a "supertrial" as a sequence of N trials of the individual measurement, and say that we are repeating many supertrials and recording the results of all the individual trials in each supertrial using some kind of physical memory (persistent 'pointer states'). Each supertrial has 2^N possible outcomes, and for a given supertrial outcome O (like up, down, up, up, up, down for N=6) you can define a measurement operator on the pointer states whose eigenvalues correspond to what the records would tell you about the fraction of supertrials where the outcome was O.

If I'm understanding the result in those references correctly, then if one models the interaction between quantum system, measuring apparatus, and records using only the deterministic Schrodinger equation, without any collapse assumption or Born rule, one can show that in the limit as the number of supertrials goes to infinity, all the amplitude for the whole system including the records becomes concentrated on state vectors that are parallel to the eigenvector of the measurement operator with the eigenvalue that exactly matches the frequency of outcome O that would have been predicted if you *had* used the collapse assumption and Born rule for individual measurements. And this should be true even if the probability for up vs. down on individual measurements was not 50/50 given the experimental setup.
I haven't looked into this in any detail, but it seems to be a recasting of an idea that has been around for a long time.
This idea hasn't made it into the mainstream because the details failed to work out.
There are all sorts of problems with the idea, and it doesn't appear to translate well to the argument I am making. The 2^N sequences that result from repeated measurements on the basic binary system do not form a measurement in themselves. There is no operator for this, no eigenfunctions, and no obvious outcome.
Can you point to any sources that explain specific ways the details fail to work out? David Z Albert is very knowledgeable about results relevant to interpretation of QM, so I'd be surprised if he missed any technical critique.
Of course there is the philosophical argument that this doesn't resolve the measurement problem because it doesn't lead to definite results for individual trials (or supertrials), but that's not taking issue with the technical claim about measuring frequencies of results in the limit of infinite trials. (David Z Albert brings up this philosophical objection in the last paragraph before section VI at https://books.google.com/books?id=_HgF3wfADJIC&lpg=PP1&pg=PA238 , and in section VI he goes on to talk about why he thinks this objection means the fact about frequencies in the limit doesn't really resolve the measurement problem.)

I had thought that for any measurable quantity, including coarse-grained statistical ones, it was possible to construct a measurement operator in QM. Doing some googling, it may be that for some coarse-grained quantities one has to use a "positive operator valued measure"; see the answer at https://physics.stackexchange.com/a/791442/59406 , and according to https://quantumcomputing.stackexchange.com/a/29326 this is not itself an operator, though it is a function defined in terms of a collection of positive operators. The page at https://www.damtp.cam.ac.uk/user/hsr1000/stat_phys_lectures.pdf also mentions that in quantum statistical mechanics, macrostates can be defined in terms of the density operator, which is used to describe mixed states (ones where we don't know the precise quantum microstate and just assign classical probabilities to different possible microstates). I don't know if either was used here, but p. 13 of the paper I mentioned at https://www.academia.edu/6975159/Quantum_dispositions_and_the_notion_of_measurement indicates that some type of operator was used to derive the result about frequencies in the limit:

"The ingenious method of introducing a quantum-mechanical equivalent of probabilities that Mittelstaedt follows in his approach relies on a new operator F^N_k whose 'intuitive' role is to measure the relative frequency of the outcome a_k in a given sequence of N outcomes."

The full details would presumably be in Mittelstaedt's book The Interpretation of Quantum Mechanics and the Measurement Process, in the paper's bibliography.
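If it helps to make a Mittelstaedt-style frequency operator concrete, here is a small sketch (my own illustration; |a|^2 = 0.7 is an assumed value): the operator is diagonal in the outcome basis with eigenvalue k/N on a sequence containing k ones, its expectation in the N-fold product state is exactly |a|^2, and its variance falls off as 1/N, which is the sense in which the amplitude concentrates on the Born frequency in the large-N limit.

```python
# Sketch: expectation and variance of the relative-frequency "operator"
# F (eigenvalue k/N on a sequence with k ones) in the N-fold product
# state whose per-trial Born weight for outcome 1 is p1 = |a|^2.
from itertools import product

def freq_stats(p1, N):
    seqs = list(product([0, 1], repeat=N))
    # Born measure of each fine-grained sequence
    weights = [p1**sum(s) * (1 - p1)**(N - sum(s)) for s in seqs]
    freqs = [sum(s) / N for s in seqs]      # eigenvalue of F on each sequence
    mean = sum(w * f for w, f in zip(weights, freqs))   # <psi|F|psi>
    var = sum(w * (f - mean)**2 for w, f in zip(weights, freqs))
    return mean, var

# |a|^2 = 0.7 is illustrative, not a value from the thread.
for N in (4, 8, 12):
    mean, var = freq_stats(0.7, N)
    print(f"N={N}: <F>={mean:.3f}, Var(F)={var:.5f}")
```

The mean is |a|^2 at every N, and the variance is |a|^2(1-|a|^2)/N, shrinking toward zero; this only demonstrates the concentration of measure, not the further (contested) step of interpreting that measure as probability.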
I quote David Albert from his contribution to the book "Many Worlds? Everett, Quantum Theory and Relativity" (Oxford, 2010): "But the business of parlaying this thought into a fully worked-out account of probability in the Everett picture quickly runs into very familiar and very discouraging sorts of trouble."

I don't have any more detail about this, but it seems, from the fact that this is not mainstream, that these difficulties proved insurmountable. For instance, it uses a frequentist definition of probability, which is known to be full of problems.
There is no single outcome from a repetition of the N trials and 2^N sequences, so it can't be an eigenvalue of some quantum operator.