On 5/12/2020 7:12 PM, Bruce Kellett wrote:
> If we now turn our attention to the quantum case, we have a
> measurement (or sequence of measurements) on a binary quantum state
>
> |psi> = a|0> + b|1>,
>
> where |0> is to be counted as a "success", |1> represents anything
> else or a "fail", and a^2 + b^2 = 1. In a single measurement, we can
> get either |0> or |1>, (or we get both on separate branches in the
> Everettian case). Over a sequence of N similar trials, we get a set of
> 2^N sequences of all possible bit strings of length N. (These all
> exist in separate "worlds" for the Everettian, or simply represent
> different "possible worlds" (or possible sequences of results) in the
> single-world case.) This set of bit strings is independent of the
> coefficients 'a' and 'b' from the original state |psi>, but if we
> carry the amplitudes of the original superposition through the
> sequence of results, we find that for every zero in a bit string we
> get a factor of 'a', and for every one, we get a factor of 'b'.
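The construction described in the quoted passage can be sketched in a few lines of Python (an editorial illustration, not part of the original exchange; the values of a, b, and N are arbitrary choices):

```python
# Enumerate the 2^N outcome sequences and attach the carried-through
# amplitude a^M * b^(N-M), where M is the number of zeros ("successes").
from itertools import product
from math import sqrt, isclose

a, b = sqrt(0.7), sqrt(0.3)   # illustrative; a^2 + b^2 = 1
N = 3

amplitudes = {}
for bits in product("01", repeat=N):
    M = bits.count("0")                      # each zero picks up a factor of a
    amplitudes["".join(bits)] = a**M * b**(N - M)

# All 2^N strings exist regardless of a and b ...
assert len(amplitudes) == 2**N
# ... but the squared amplitudes still sum to 1.
assert isclose(sum(amp**2 for amp in amplitudes.values()), 1.0)
```

The set of strings is indeed independent of a and b; only the attached amplitudes change.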
This is what you previously argued was not part of the Schroedinger
equation and was a cheat to slip the Born rule in. It's what I said was
Carroll's "weight" or splitting of many pre-existing worlds.
>
> Consequently, the amplitude multiplying any sequence of M zeros and
> (N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a'
> to find the turning point (and the value of 'a' that maximizes this
> amplitude), we find
>
> |a|^2 = M/N,
Maximizing this amplitude, instead of simply counting the number of
sequences with M zeroes as a fraction of all sequences (which is
independent of a) is effectively assuming |a|^2 is a probability
weight. The "most likely" number of zeroes, the number that occurs most
often in the 2^N sequences, is N/2.
When you do this, you can see by analogy that it is a probability, but one did not assume this at the start.
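For reference, the turning-point calculation quoted above can be reconstructed as follows (an editorial addition, assuming real positive amplitudes with b = sqrt(1 - a^2)):

```latex
% Maximize the amplitude of a string with M zeros, using b = \sqrt{1-a^2}:
f(a) = a^M \, b^{N-M} = a^M \, (1-a^2)^{(N-M)/2},
\qquad
\ln f = M \ln a + \frac{N-M}{2}\,\ln(1-a^2).
% Setting the derivative to zero:
\frac{d \ln f}{d a} = \frac{M}{a} - \frac{(N-M)\,a}{1-a^2} = 0
\;\Longrightarrow\; M(1-a^2) = (N-M)\,a^2
\;\Longrightarrow\; |a|^2 = \frac{M}{N}.
```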
Bruce
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAFxXSLQ-%2BoGsRUcNH8NZ3Ypd9eR5Y%2BR6HwweBSbTbB6JR-T-mw%40mail.gmail.com.
On 5/12/2020 10:08 PM, Bruce Kellett wrote:
On Wed, May 13, 2020 at 2:06 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
> Consequently, the amplitude multiplying any sequence of M zeros and
> (N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a'
> to find the turning point (and the value of 'a' that maximizes this
> amplitude), we find
>
> |a|^2 = M/N,
Maximizing this amplitude, instead of simply counting the number of
sequences with M zeroes as a fraction of all sequences (which is
independent of a) is effectively assuming |a|^2 is a probability
weight. The "most likely" number of zeroes, the number that occurs most
often in the 2^N sequences, is N/2.
I agree that if you simply look for the most likely number of zeros, ignoring the amplitudes, then that is N/2. But I do not see that maximising the amplitude for any particular value of M is to effectively assume that it is a probability.
I think it is. How would you justify ".. the amplitude multiplying any sequence of M zeros and (N-M) ones, is a^M b^(N-M)..." except by saying a is a probability, so a^M is the probability of M zeroes. If it's not a probability, why should it be multiplied into an expression to be maximized?
In any case, though, I don't see the form of the Born rule as something problematic. The problem is getting from counting branches to probabilities.
Once you assume there is a probability measure, you're pretty much forced to the Born rule as the only consistent probability measure.
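The contrast drawn here can be illustrated numerically (an editorial sketch; N and |a|^2 are arbitrary illustrative values): raw branch counting peaks at M = N/2 no matter what the amplitudes are, while Born weighting moves the peak to M near |a|^2 N.

```python
# Compare raw branch counting with Born-weighted counting.
from math import comb

N = 100
a2 = 0.7                      # |a|^2; |b|^2 = 1 - a2

# Number of length-N strings with M zeros: C(N, M), independent of a.
counts = {M: comb(N, M) for M in range(N + 1)}
# Born weight of that class of strings: C(N, M) |a|^(2M) |b|^(2(N-M)).
weighted = {M: comb(N, M) * a2**M * (1 - a2)**(N - M) for M in range(N + 1)}

assert max(counts, key=counts.get) == N // 2          # peak at N/2
assert max(weighted, key=weighted.get) == round(a2 * N)  # peak near |a|^2 N
```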
Brent
There is nothing wrong formally with what you argue. I would though say this is not entirely the Born rule. The Born rule connects eigenvalues with the probabilities of a wave function. For quantum state amplitudes a_i in a superposition ψ = Σ_i a_i φ_i, with φ*_j φ_i = δ_{ij}, the spectrum of an observable O obeys
⟨O⟩ = Σ_i O_i p_i = Σ_i O_i a*_i a_i.
Your argument has a tight fit with this for O_i = ρ_{ii}. The difficulty in part stems from the fact we keep using standard ideas of probability to understand quantum physics, which is more fundamentally about amplitudes which give probabilities, but are not probabilities. Your argument is very frequentist.
The argument by Carroll and Sebens, using a concept of the wave function as an update mechanism, is somewhat Bayesian.
This is curious, since Fuchs developed QBism as a sort of ultra-ψ-epistemic interpretation, and Carroll and Sebens are appealing to the wave function as a similar device for a ψ-ontological interpretation. I do though agree that if there is a proof of the Born rule, it may not depend on some particular quantum interpretation. If the Born rule is some unprovable postulate, then it would seem plausible that any sufficiently strong quantum interpretation may prove the Born rule, or provide the ancillary axiomatic structure necessary for such a proof. In other words, maybe quantum interpretations are essentially unprovable physical axioms that, if sufficiently strong, provide a proof of the Born rule.
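The identity stated above can be checked directly for a two-state example (an editorial sketch; the amplitudes and observable are arbitrary illustrative values):

```python
# Check that <O> = sum_i O_i |a_i|^2 equals Tr(rho O) with rho = |psi><psi|,
# for an observable diagonal in the basis of the superposition.
a = [0.6, 0.8j]              # amplitudes; |a_0|^2 + |a_1|^2 = 1
O_diag = [1.0, -1.0]         # diagonal entries of the observable O

# Density matrix rho_ij = a_i a_j*.
rho = [[a[i] * a[j].conjugate() for j in range(2)] for i in range(2)]

expect_born = sum(O_diag[i] * abs(a[i])**2 for i in range(2))
expect_trace = sum(rho[i][i] * O_diag[i] for i in range(2)).real

assert abs(expect_born - expect_trace) < 1e-12
```

The two expressions agree term by term, since the diagonal of the density matrix is exactly ρ_{ii} = a*_i a_i.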
I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that probability is 'the expectation value of the relative frequency'.
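Von Weizsaecker's phrase, as quoted, is easy to verify for ordinary Bernoulli trials (an editorial sketch; p and N are arbitrary illustrative values):

```python
# For N trials with success probability p, the expectation value of the
# relative frequency M/N over the binomial distribution is exactly p.
from math import comb, isclose

p, N = 0.3, 20
expect_rel_freq = sum(
    (M / N) * comb(N, M) * p**M * (1 - p)**(N - M) for M in range(N + 1)
)
assert isclose(expect_rel_freq, p)
```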
On 17 May 2020, at 11:39, 'scerir' via Everything List <everyth...@googlegroups.com> wrote:
> I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that probability is 'the expectation value of the relative frequency'.
Bruce wrote:
It is this subjectivity, and appeal to Bayesianism, that I reject for QM. I consider probabilities to be intrinsic properties -- not further analysable. In other words, I favour a propensity interpretation. Relative frequencies are the way we generally measure probabilities, but they do not define them.
On 17 May 2020, at 11:39, 'scerir' via Everything List <everyth...@googlegroups.com> wrote:
I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that probability is 'the expectation value of the relative frequency'.
That is the frequency approach to probability. Strictly speaking it is false, as it gives the wrong results for the “non-normal histories” (normal in the sense of Gauss). But it works rather well in the normal world (sorry for being tautological).
At its antipode, there is the Bayesian “subjective probabilities”, which makes sense when complete information is available. So it does not make sense in many practical situations.
Remark: the expression “subjective probabilities” is used technically for this Bayesian approach, and is quite different from the first-person indeterminacy that Everett calls “subjective probabilities”. The “subjective probabilities” of Everett are “objective probabilities”, and can be defined through a frequency operator in the limit.
The same occurs in arithmetic, where the subjective (first person) probabilities are objective (they obey objective, sharable, laws).
Naïve many-worlds views are not sustainable, but there is no problem with consistent histories, and 0 worlds.
Bruno
Deriving the Born rule within the context of QM seems to me a rather
futile effort as you still have the formalism of QM itself that is then
unexplained. So, I think one has to tackle QM itself. It seems to me
quite plausible that QM gives an approximate description of a multiverse
of algorithms. So, we are then members of such a multiverse, this then
includes alternative versions of us who found different results in
experiments, but the global structure of this multiverse is something
that QM does not describe adequately.
QM then gives a local approximation of this multiverse that's valid in
the neighborhood of a given algorithm. That algorithm can be an observer
who has found some experimental result, and the local approximation
gives a description of the "nearby algorithms" that are processing
alternative measurement results. The formalism of QM can then arise due
to having to sum over all algorithms that fall within the criterion of
being close to the particular algorithm that is processing some
particular data. This is then a constrained summation over all possible
algorithms. One can then replace such a constrained summation by an
unrestricted summation and implement the constraint by including phase
factors of the form exp(i u constraint function) where constraint
function = 0 for the terms of the original constrained summation. One
can then write the original summation as an integral over u.
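The constrained-to-unconstrained summation trick described above can be sketched for an integer-valued constraint function (an editorial example; the functions f and C are hypothetical stand-ins, and the u-integral is discretized):

```python
# For integer-valued C, the Kronecker delta delta_{C(x),0} equals
# (1/2pi) * integral_0^{2pi} exp(i*u*C(x)) du, so a sum restricted to
# C(x) = 0 becomes an unrestricted sum with a phase factor under an
# integral over u.
from cmath import exp, pi

def f(x):            # hypothetical summand
    return x * x + 1.0

def C(x):            # hypothetical constraint; C(x) = 0 selects x = 1
    return x - 1

xs = range(-3, 4)
constrained = sum(f(x) for x in xs if C(x) == 0)

# Discretized u-integral: exact for integer C when K exceeds max |C(x)|.
K = 64
unconstrained = sum(
    f(x) * sum(exp(1j * (2 * pi * k / K) * C(x)) for k in range(K)) / K
    for x in xs
)

assert abs(unconstrained.real - constrained) < 1e-9
assert abs(unconstrained.imag) < 1e-9
```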
On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
> There is nothing wrong formally with what you argue. I would though say this is not entirely the Born rule. The Born rule connects eigenvalues with the probabilities of a wave function. For quantum state amplitudes a_i in a superposition ψ = Σ_i a_i φ_i, with φ*_j φ_i = δ_{ij}, the spectrum of an observable O obeys ⟨O⟩ = Σ_i O_i p_i = Σ_i O_i a*_i a_i. Your argument has a tight fit with this for O_i = ρ_{ii}. The difficulty in part stems from the fact we keep using standard ideas of probability to understand quantum physics, which is more fundamentally about amplitudes which give probabilities, but are not probabilities. Your argument is very frequentist.

I can see why you might think this, but it is actually not the case. My main point is to reject subjectivist notions of probability: probabilities in QM are clearly objective -- there is an objective decay rate (or half-life) for any radioactive nucleus; there is a clearly objective probability for that spin to be measured up rather than down in a Stern-Gerlach magnet; and so on.

> The argument by Carroll and Sebens, using a concept of the wave function as an update mechanism, is somewhat Bayesian.

It is this subjectivity, and appeal to Bayesianism, that I reject for QM. I consider probabilities to be intrinsic properties -- not further analysable. In other words, I favour a propensity interpretation. Relative frequencies are the way we generally measure probabilities, but they do not define them.

> This is curious since Fuchs developed QBism as a sort of ultra-ψ-epistemic interpretation, and Carroll and Sebens are appealing to the wave function as a similar device for a ψ-ontological interpretation. I do though agree that if there is a proof of the Born rule, it may not depend on some particular quantum interpretation. If the Born rule is some unprovable postulate, then it would seem plausible that any sufficiently strong quantum interpretation may prove the Born rule, or provide the ancillary axiomatic structure necessary for such a proof. In other words, maybe quantum interpretations are essentially unprovable physical axioms that, if sufficiently strong, provide a proof of the Born rule.

I would agree that the Born rule is unlikely to be provable within some model of quantum mechanics -- particularly if that model is deterministic, as is many-worlds. The mistake that advocates of many-worlds are making is to try and graft probabilities, and the Born rule, on to a non-probabilistic model. That endeavour is bound to fail. (In fact, many have given up on trying to incorporate any idea of 'uncertainty' into their model -- this is what is known as the "fission program".) One of the major problems people like Deutsch, Carroll, and Wallace encounter is trying to reconcile Everett with David Lewis's "Principal Principle", which is the rule that one should align one's personal subjective degrees of belief with the objective probabilities. When these people essentially deny the existence of objective probabilities, they have trouble reconciling subjective beliefs with anything at all.

Bruce
On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
There is nothing wrong formally with what you argue. I would though say this is not entirely the Born rule. The Born rule connects eigenvalues with the probabilities of a wave function. For quantum state amplitudes a_i in a superposition ψ = Σ_i a_i φ_i, with φ*_j φ_i = δ_{ij}, the spectrum of an observable O obeys
⟨O⟩ = Σ_i O_i p_i = Σ_i O_i a*_i a_i.
Your argument has a tight fit with this for O_i = ρ_{ii}.
The difficulty in part stems from the fact we keep using standard ideas of probability to understand quantum physics, which is more fundamentally about amplitudes which give probabilities, but are not probabilities. Your argument is very frequentist.
I can see why you might think this, but it is actually not the case. My main point is to reject subjectivist notions of probability: probabilities in QM are clearly objective -- there is an objective decay rate (or half-life) for any radioactive nucleus; there is a clearly objective probability for that spin to be measured up rather than down in a Stern-Gerlach magnet; and so on.
Objective probabilities are frequentism.
The idea from a probability perspective is that one has a sample space with a known distribution. Your argument agrees with my first impression when I encountered the Bayesian approach to QM by Fuchs and Schack, whom I have had occasion to talk to. My impression right away was entirely the same: we have operators with outcomes, and they have a distribution etc. according to the Born rule. However, we have a bit of a sticking point: is the Born rule really provable? We most often think in a sample-space, frequentist manner with regard to the Born rule. However, it is at least plausible to think of the problem from a Bayesian perspective, where the probabilities are regarded as known once the Bayesian updates have become very precise.
However, all this talk of probability theory may itself be wrong. Quantum mechanics derives probabilities or distributions or spectra, but it really is a theory of amplitudes, or of the density matrix. The probabilities come from the modulus squared, or from the trace over the density matrix. Framing QM around an interpretation of probability may be wrong-headed to begin with.
The argument by Carroll and Sebens, using a concept of the wave function as an update mechanism, is somewhat Bayesian.
It is this subjectivity, and appeal to Bayesianism, that I reject for QM. I consider probabilities to be intrinsic properties -- not further analysable. In other words, I favour a propensity interpretation. Relative frequencies are the way we generally measure probabilities, but they do not define them.
I could, I suppose, talk to Fuchs about this. He regards QM as having this uncertainty principle, from which we can only infer probabilities with a large number of experiments, whereupon we update Bayesian priors. Of course a frequentist, or a system based on relative frequencies, would say we just make lots of measurements and use that as a sample space. In the end either way works, because QM appears to be a pure system that derives probabilities. In other words, since outcomes occur acausally or spontaneously, there are no meddlesome issues of incomplete knowledge. Because of this, and I have pointed it out, the two perspectives end up being largely equivalent.
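The convergence described here can be sketched with a standard conjugate Beta update (an editorial example; the counts are hypothetical): as data accumulates, the posterior mean approaches the relative frequency and its spread shrinks, so the Bayesian and frequentist routes land on the same number.

```python
# Beta(1,1) prior updated on k "up" outcomes in n trials gives the
# posterior Beta(1+k, 1+n-k); its mean tends to k/n and its standard
# deviation shrinks roughly like 1/sqrt(n).
from math import sqrt

def posterior_mean_sd(k, n):
    alpha, beta = 1 + k, 1 + (n - k)        # conjugate Beta update
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, sqrt(var)

m_small, sd_small = posterior_mean_sd(7, 10)
m_big, sd_big = posterior_mean_sd(700, 1000)

assert abs(m_big - 0.7) < 0.01     # mean near the observed frequency
assert sd_big < sd_small / 5       # spread shrinks with more data
```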
This is curious, since Fuchs developed QBism as a sort of ultra-ψ-epistemic interpretation, and Carroll and Sebens are appealing to the wave function as a similar device for a ψ-ontological interpretation.
I do though agree that if there is a proof of the Born rule, it may not depend on some particular quantum interpretation. If the Born rule is some unprovable postulate, then it would seem plausible that any sufficiently strong quantum interpretation may prove the Born rule, or provide the ancillary axiomatic structure necessary for such a proof. In other words, maybe quantum interpretations are essentially unprovable physical axioms that, if sufficiently strong, provide a proof of the Born rule.
I would agree that the Born rule is unlikely to be provable within some model of quantum mechanics -- particularly if that model is deterministic, as is many-worlds. The mistake that advocates of many-worlds are making is to try and graft probabilities, and the Born rule, on to a non-probabilistic model. That endeavour is bound to fail. (In fact, many have given up on trying to incorporate any idea of 'uncertainty' into their model -- this is what is known as the "fission program".) One of the major problems people like Deutsch, Carroll, and Wallace encounter is trying to reconcile Everett with David Lewis's "Principal Principle", which is the rule that one should align one's personal subjective degrees of belief with the objective probabilities. When these people essentially deny the existence of objective probabilities, they have trouble reconciling subjective beliefs with anything at all.
Bruce
Maybe a part of the issue here is the role of counterfactual statements. I think that may be more of an issue here than the objective vs. subjective perspective on QM. Most interpretations of QM are not counterfactually definite. In fact, the only one I think is counterfactually definite is de Broglie-Bohm. The Lewis idea was that counterfactual statements are modal-logical, and thus there exist these alternative worlds. I am not sure to what degree MWI upholds that idea. However, that is also not necessarily an argument against the idea of these branching worlds.
LC
On 17 May 2020, at 20:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 5/17/2020 3:31 AM, Bruno Marchal wrote:
On 17 May 2020, at 11:39, 'scerir' via Everything List <everyth...@googlegroups.com> wrote:
I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that probability is 'the expectation value of the relative frequency'.
That is the frequency approach to probability. Strictly speaking it is false, as it gives the wrong results for the “non-normal histories” (normal in the sense of Gauss). But it works rather well in the normal world (sorry for being tautological).
At its antipode, there is the Bayesian “subjective probabilities”, which makes sense when complete information is available. So it does not make sense in many practical situations.
Remark: the expression “subjective probabilities” is used technically for this Bayesian approach, and is quite different from the first-person indeterminacy that Everett calls “subjective probabilities”. The “subjective probabilities” of Everett are “objective probabilities”, and can be defined through a frequency operator in the limit.
That's questionable. For the frequencies to be correct, the splitting must be uneven. But there's nothing in the Schroedinger evolution to produce this. If there are two eigenvalues and the Born probabilities are 0.5 and 0.5, then it works fine. But if the Born probabilities are 0.501 and 0.499, then there must be a thousand new worlds, yet the Schroedinger equation still only predicts two outcomes.
Brent
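The counting point above can be made concrete (an editorial sketch): with equal-weight branches, the minimum number of branches needed to realize probabilities p and 1-p exactly is the denominator of p in lowest terms, which jumps from 2 to 1000 when p moves from 0.5 to 0.501.

```python
# Smallest number of equal-weight branches reproducing probability p
# exactly by branch counting: the denominator of p in lowest terms.
from fractions import Fraction

def min_equal_weight_branches(p):
    return Fraction(p).limit_denominator(10**6).denominator

assert min_equal_weight_branches(0.5) == 2
assert min_equal_weight_branches(0.501) == 1000
```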
The same occurs in arithmetic, where the subjective (first person) probabilities are objective (they obey objective, sharable, laws).
Naïve many-worlds views are not sustainable, but there is no problem with consistent histories, and 0 worlds.
Bruno
On Mon, May 18, 2020 at 11:20 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:
> Objective probabilities are frequentism.

Rubbish. Popper's original propensity ideas may have had frequentist overtones, but we can certainly move beyond Popper's outdated thinking. An objective probability is one that is an intrinsic property of an object, such as a radioactive nucleus. One can use relative frequencies or Bayesian updating to estimate these intrinsic probabilities experimentally. But neither relative frequencies nor Bayesian updating of subjective beliefs actually defines what the probabilities are in quantum mechanics.
On Sunday, May 17, 2020 at 9:06:12 PM UTC-5, Bruce wrote:
> Rubbish. Popper's original propensity ideas may have had frequentist overtones, but we can certainly move beyond Popper's outdated thinking. An objective probability is one that is an intrinsic property of an object, such as a radioactive nucleus. One can use relative frequencies or Bayesian updating to estimate these intrinsic probabilities experimentally. But neither relative frequencies nor Bayesian updating of subjective beliefs actually defines what the probabilities are in quantum mechanics.

Probability and statistics are in part an empirical subject. This is not pure mathematics, and it is one reason why there is no single foundation. It would be as if linear algebra, or any other area of mathematics, had two competing axiomatic foundations. There are also subdivisions in the frequentist and subjectivist camps, particularly the first of these.

Quantum mechanics computes probabilities not according to some idea of incomplete knowledge, but according to amplitudes. There is no lack of information from the perspective of the observer or analyst; the uncertainty is intrinsic. From what I can see, this means it is irrelevant whether one adopts a Bayesian or frequentist perspective. The division between these two approaches to probability correlates somewhat with interpretations of QM that are ψ-ontic (aligned with frequentist) and ψ-epistemic (aligned with Bayesian). The division between the two camps amongst statisticians is amazingly sharp and almost hostile. I think the issue is somewhat irrelevant.

LC
On 5/17/2020 6:20 PM, Lawrence Crowell wrote:
On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:
There is nothing wrong formally with what you argue. I would though say this is not entirely the Born rule. The Born rule connects eigenvalues with the probabilities of a wave function. For quantum state amplitudes a_i in a superposition ψ = Σ_i a_i φ_i, with φ*_j φ_i = δ_ij, the spectrum of an observable O obeys

⟨O⟩ = Σ_i O_i p_i = Σ_i O_i a*_i a_i.
Your argument has a tight fit with this for O_i = ρ_{ii}.
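As a toy numerical check (my own illustration, with made-up amplitudes and observable, not anything from the thread), the expectation value ⟨O⟩ = Σ_i O_i a*_i a_i can be computed directly from the amplitudes:

```python
import numpy as np

# Made-up two-state example: psi = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
a, b = np.sqrt(0.3), np.sqrt(0.7)
amps = np.array([a, b])

# An observable O diagonal in this basis, with eigenvalues O_i = +1, -1.
O_eigvals = np.array([1.0, -1.0])

# Born rule: p_i = a_i* a_i, so <O> = sum_i O_i |a_i|^2.
probs = np.abs(amps) ** 2
expectation = float(np.sum(O_eigvals * probs))

print(probs)        # ~ [0.3, 0.7]
print(expectation)  # ~ 0.3 - 0.7 = -0.4
```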
The difficulty in part stems from the fact that we keep using standard ideas of probability to understand quantum physics, which is more fundamentally about amplitudes, which give probabilities but are not themselves probabilities. Your argument is very frequentist.
I can see why you might think this, but it is actually not the case. My main point is to reject subjectivist notions of probability: probabilities in QM are clearly objective -- there is an objective decay rate (or half-life) for any radioactive nucleus; there is a clearly objective probability for that spin to be measured up rather than down in a Stern-Gerlach magnet; and so on.
Objective probabilities are frequentism.
Not necessarily. Objective probabilities may be based on symmetries and the principle of insufficient reason. I agree with Bruce; just because you measure a probability with frequencies, that doesn't imply it must be based on frequentism.
The idea from a probability perspective is that one has a sample space with a known distribution. Your argument matches my own first impression when I encountered the Bayesian approach to QM of Fuchs and Schack, whom I have had occasion to talk to. My impression right away was entirely the same: we have operators with outcomes, and the outcomes have a distribution according to the Born rule. However, we have a bit of a sticking point: is the Born rule really provable? We most often think of the Born rule in a sample-space, frequentist manner. However, it is at least plausible to think of the problem from a Bayesian perspective, where the probabilities have become known once the Bayesian updates have become very precise.
However, all this talk of probability theory may itself be misplaced. Quantum mechanics derives probabilities, distributions, and spectra, but it is really a theory of amplitudes, or of the density matrix. The probabilities come from the modulus squared of the amplitudes, or from a trace over the density matrix. Framing QM around an interpretation of probability may be wrong-headed to begin with.
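To illustrate the last point (a sketch of my own, with arbitrary amplitudes): the probabilities can be read off either as the modulus squared of the amplitudes or from the diagonal of the density matrix ρ = |ψ⟩⟨ψ|, and the two routes agree:

```python
import numpy as np

# Arbitrary illustrative amplitudes for a three-outcome superposition.
amps = np.array([0.5, 0.5j, np.sqrt(0.5)])
amps = amps / np.linalg.norm(amps)   # ensure normalization

# Route 1: modulus squared of each amplitude.
p_mod = np.abs(amps) ** 2

# Route 2: density matrix rho = |psi><psi|; p_i = Tr(rho P_i),
# where P_i projects onto basis state i -- i.e. just rho_ii.
rho = np.outer(amps, amps.conj())
p_trace = np.real(np.diag(rho))

print(np.allclose(p_mod, p_trace))   # True: both routes agree
print(p_mod.sum())                   # sums to 1 (up to floating point)
```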
But if it's not just mathematics, there has to be some way to make contact with experiment...which for probabilistic predictions usually means frequencies.
Brent
The argument by Carroll and Sebens, using a concept of the wave function as an update mechanism, is somewhat Bayesian.
It is this subjectivity, and appeal to Bayesianism, that I reject for QM. I consider probabilities to be intrinsic properties -- not further analysable. In other words, I favour a propensity interpretation. Relative frequencies are the way we generally measure probabilities, but they do not define them.
I could, I suppose, talk to Fuchs about this. He regards QM as having an intrinsic uncertainty, from which we can only infer probabilities with a large number of experiments, whereupon we update Bayesian priors. Of course a frequentist, or a system based on relative frequencies, would say we just make lots of measurements and use that as a sample space. In the end either way works, because QM appears to be a pure system that derives probabilities. In other words, since outcomes occur acausally or spontaneously, there are no meddlesome issues of incomplete knowledge. Because of this, as I have pointed out, the two perspectives end up being largely equivalent.
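As a sketch of this near-equivalence (my own toy simulation, with an assumed Born probability of 0.3): a relative-frequency estimate and a Bayesian posterior mean starting from a uniform Beta(1,1) prior converge to the same value over many trials:

```python
import random

random.seed(0)
p_true = 0.3            # assumed Born probability |a|^2 for outcome "0"
N = 100_000

# Simulate N independent measurements on identically prepared systems.
zeros = sum(1 for _ in range(N) if random.random() < p_true)

# Frequentist estimate: the relative frequency of outcome "0".
freq_est = zeros / N

# Bayesian estimate: uniform Beta(1, 1) prior updated on the same data;
# the posterior is Beta(1 + zeros, 1 + N - zeros), whose mean is:
bayes_est = (1 + zeros) / (2 + N)

print(round(freq_est, 3), round(bayes_est, 3))   # both close to 0.3
```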
This is curious, since Fuchs developed QBism as a sort of ultra-ψ-epistemic interpretation, while Carroll and Sebens appeal to the wave function as a similar device for a ψ-ontological interpretation.
I do though agree that if there is a proof of the Born rule, it may not depend on some particular quantum interpretation. If the Born rule is an unprovable postulate, then it would seem plausible that any sufficiently strong quantum interpretation may prove the Born rule, or provide the ancillary axiomatic structure necessary for such a proof. In other words, maybe quantum interpretations are essentially unprovable physical axioms that, if sufficiently strong, provide a proof of the Born rule.
I would agree that the Born rule is unlikely to be provable within some model of quantum mechanics -- particularly if that model is deterministic, as is many-worlds. The mistake that advocates of many-worlds are making is to try and graft probabilities, and the Born rule, on to a non-probabilistic model. That endeavour is bound to fail. (In fact, many have given up on trying to incorporate any idea of 'uncertainty' into their model -- this is what is known as the "fission program".) One of the major problems people like Deutsch, Carroll, and Wallace encounter is trying to reconcile Everett with David Lewis's "Principal Principle", which is the rule that one should align one's personal subjective degrees of belief with the objective probabilities. When these people essentially deny the existence of objective probabilities, they have trouble reconciling subjective beliefs with anything at all.
Bruce
Maybe part of the issue here is the role of counterfactual statements. I think that may be more of an issue than the objective vs. subjective perspective on QM. Most interpretations of QM are not counterfactually definite. In fact, the only one I think is counterfactually definite is de Broglie-Bohm. Lewis's idea was that counterfactual statements are modal-logical, and thus there exist these alternative worlds. I am not sure to what degree MWI upholds that idea. However, that is also not necessarily an argument against the idea of branching worlds.
LC
<<Quantum mechanics computes probabilities not according to some idea of incomplete knowledge>>
“One may call these uncertainties [i.e. the Born probabilities] objective, in that they are simply a consequence of the fact that we describe the experiment in terms of classical physics; they do not depend in detail on the observer. One may call them subjective, in that they reflect our incomplete knowledge of the world.” (Heisenberg)
On 17 May 2020, at 20:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 5/17/2020 3:31 AM, Bruno Marchal wrote:
On 17 May 2020, at 11:39, 'scerir' via Everything List <everyth...@googlegroups.com> wrote:
I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that probability is 'the expectation value of the relative frequency'.
That is the frequency approach to probability. Strictly speaking it is false, as it gives the wrong results for the "non-normal" histories (normal in the sense of Gauss). But it works quite well in the normal worlds (sorry for being tautological).
At its antipode, there is the Bayesian notion of "subjective probabilities", which makes sense when complete information is available. So it does not make sense in many practical situations.
Remark: the expression "subjective probabilities" is used technically for this Bayesian approach, and is quite different from the first-person indeterminacy that Everett calls "subjective probabilities". The "subjective probabilities" of Everett are "objective probabilities", and can be defined through a frequency operator in the limit.
That's questionable. For the frequencies to be correct, the splitting must be uneven. But there's nothing in the Schroedinger evolution to produce this. If there are two eigenvalues and the Born probabilities are 0.5 and 0.5, then it works fine. But if the Born probabilities are 0.501 and 0.499, then there must be a thousand new worlds, yet the Schroedinger equation still only predicts two outcomes.
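A small calculation (my own illustration of the point, using Brent's 0.501/0.499 numbers) makes the tension concrete: counting the 2^N branch bit strings equally always gives an expected frequency of exactly 1/2, while weighting each branch by its Born weight recovers 0.501:

```python
from math import comb

p = 0.501    # Born probability for outcome "1" in Brent's example
N = 10       # number of binary measurements; 2^N branch bit strings

# Naive branch counting: every one of the 2^N bit strings counted once.
# The expected fraction of ones is exactly 1/2, independent of p.
count_avg = sum(bin(s).count("1") for s in range(2 ** N)) / (N * 2 ** N)

# Born-weighted average: a string with M ones carries weight p^M (1-p)^(N-M),
# and there are C(N, M) such strings; the weighted mean fraction is p.
weighted_avg = sum((M / N) * comb(N, M) * p ** M * (1 - p) ** (N - M)
                   for M in range(N + 1))

print(count_avg)               # 0.5 exactly
print(round(weighted_avg, 3))  # 0.501: the Born weight reappears
```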
The SWE predicts two first-person outcomes, OK. But the "number" of worlds, or of histories, depends on the metaphysical assumptions.
With mechanism it is a bit hard not to see the physical multiverse as a confirmation of the many-computations (many = 2^aleph_0 at least!) theorem in (meta)-arithmetic (which is not an interpretation).
Bruno
Brent
The same occurs in arithmetic, where the subjective (first-person) probabilities are objective (they obey objective, sharable laws).
Naïve many-worlds views are not sustainable, but there is no problem with consistent histories, and 0 worlds.
Bruno
On Monday, May 18, 2020 at 12:12:28 AM UTC-5, Brent wrote:
> Not necessarily. Objective probabilities may be based on symmetries and the principle of insufficient reason. I agree with Bruce; just because you measure a probability with frequency, that doesn't imply it must be based on frequentism.

That is not what I meant. Bruce does sound as if he is appealing to an objective basis for probability based on the frequency of occurrences of events. I am not arguing that this is wrong, but rather that this is one interpretation of probability.
That's what motivated the MWI in the first place. Now, the MWI may not be exactly correct, as QM may itself only be an approximation to a more fundamental theory. But the approach of trying to address interpretational problems by relying on macroscopic concepts is a priori doomed to fail.
On 18 May 2020, at 21:38, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 5/18/2020 3:29 AM, Bruno Marchal wrote:
With mechanism it is a bit hard to not see the physical multiverse
Nobody sees the physical multiverse. It's as much a theoretical construct as arithmetic is.