Malus' law photon by photon


pierrel5556

unread,
Dec 17, 2021, 7:02:50 AM12/17/21
to Bell inequalities and quantum foundations
Hello everyone,

I wrote a new version of the algorithm simulating a local polarizer, which I use in my EPR simulations.

It now makes it possible to produce Malus' law photon by photon through a series of polarizers without using a random component (RNG).

That is to say, the output taken by each individual photon at each polarizer depends only on the two local variables that I use and a deterministic calculation with the polarizer's angle.

But a population of photons statistically produces Malus' law.

This means that randomness, or pseudo-randomness, is not necessary to produce this law.

Is this something new?
I am not sure what has been established on this issue.

The method is explained at the beginning of this page: http://pierrel5.free.fr/physique/pol2/pol2_doc_en.htm

Thank you for your answers.
Pierre

Richard Gill

unread,
Dec 17, 2021, 7:20:48 AM12/17/21
to pierrel5556, Bell inequalities and quantum foundations
Dear Pierre

You write "This document shows the performance that a local model can produce in an EPR experiment, so that the results can be compared with the QM model. He [It?] also shows that it is possible to produce Malus' law photon by photon without involving a random component, or a superposition of states."

But we already know that it is possible to produce Malus' law photon by photon in a deterministic way. There are very many ways to do it. 
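For instance, here is one minimal way (a sketch in R, my own notation, not Pierre's algorithm): step a hidden variable r deterministically through a uniform grid on [0,1), and let a photon with polarization lambda pass a polarizer at angle a exactly when r < cos^2(a - lambda). No RNG appears anywhere, yet the ensemble transmission is cos^2(a - lambda), i.e. Malus' law:

# Deterministic toy model of Malus' law, one polarizer stage.
# The hidden variable r is a fixed uniform grid, not a random draw.
malus_demo <- function(n = 10000, lambda = 0, a = 30 * pi / 180) {
  r <- ((1:n) - 0.5) / n           # deterministic hidden variable, one value per photon
  passed <- r < cos(a - lambda)^2  # local, deterministic pass rule
  mean(passed)                     # fraction transmitted
}
malus_demo()          # 0.75
cos(30 * pi / 180)^2  # 0.75, the Malus' law prediction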

We also know (at least I do, as a mathematician) that it is not possible to do it if certain constraints are imposed.

Could you rewrite your program so that the flow of computations is as follows:

Repeat many times:

- information goes from "source" to two "detectors"
- a setting goes to each "detector" (I am allowed to supply whatever settings I like - you have no control of the settings). [Think of them as whole numbers of degrees from 0 to 360.]
- each "detector" computes and outputs a binary outcome (+/-1) (function of input from source and setting)

Collect just those "repeats" where the settings were a, b and compute the average value of the product of the outcomes for those repeats.
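In code, the flow I mean looks like this (an R sketch; outcome_A, outcome_B and source_fn are placeholders for whatever local, deterministic rules and source information the model supplies):

# Protocol harness: the tester chooses the settings, the model supplies the rules.
run_trials <- function(N, outcome_A, outcome_B, source_fn) {
  a <- sample(0:359, N, replace = TRUE)  # my setting for detector A, whole degrees
  b <- sample(0:359, N, replace = TRUE)  # my setting for detector B
  lam <- source_fn(N)                    # information sent from "source" to both sides
  data.frame(a, b,
             x = outcome_A(lam, a),      # must be +/-1, function of source info and a only
             y = outcome_B(lam, b))      # must be +/-1, function of source info and b only
}
# Average product of outcomes for one settings pair (a0, b0):
corr_ab <- function(d, a0, b0) with(d[d$a == a0 & d$b == b0, ], mean(x * y))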

You must violate one or more of the "rules" which I just listed. Which one did you have to break? You tell us!

Richard






Chantal Roth

unread,
Dec 17, 2021, 8:43:05 AM12/17/21
to 'Scott Glancy' via Bell inequalities and quantum foundations
Hi Pierre,

This is awesome! I see several new ideas in there that we have not discussed before as far as I remember (and we have talked about this for a LONG time :-).

Richard, did you read the entire page? There are multiple experiments, also relating to Eberhard, where he for instance shows how a negative J can be reached.

Richard, yes, we know that the program you suggest will not break the inequality; believe it or not, we actually know that :-). But it is not that simple. A realistic computer simulation must also consider other elements, like the computation of the correlation and drift over time...

For instance, see the last section on double pair receptions on the page (note this is also a local model):
"We see in this example that for a value of J/N < -0.0001, the sign of J/N becomes significant for N > 35,000,000 and no longer takes up a positive value, effectively indicating a signalling between Alice and Bob."

Best wishes,
Chantal

Richard Gill

unread,
Dec 17, 2021, 11:45:30 AM12/17/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
Chantal, 

A realistic simulation of an actually done experiment must also simulate the actual experimental design. 

The loophole-free experiments made their case (in part) by using the martingale based test-statistic which I proposed more than 20 years ago, for very good reasons.

Two of them used certain ideas of Eberhard’s (choice of states and measurements) but they did not need Eberhard’s variant of CHSH.

I’m not interested in simulation models which reproduce the results of experiments with known and in principle avoidable defects. We already know it can be done, and how!

Richard

Sent from my iPhone

Chantal Roth

unread,
Dec 17, 2021, 12:16:13 PM12/17/21
to Richard Gill, 'Scott Glancy' via Bell inequalities and quantum foundations
Richard,

So how would the martingale test detect the double pair receptions as described on the page?
Do you have an R script where Pierre for instance could plug in his numbers that would show that?

And, I don't remember exactly: was the martingale test run on the actual Giustina raw data?
If I remember correctly, we talked about this, and it was not actually run on the raw data? (It was assumed to be a martingale.)

Best wishes,
Chantal

Richard Gill

unread,
Dec 17, 2021, 12:25:16 PM12/17/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
Chantal, 

The martingale test is applied in an experiment in which the experimental unit is a “pair of time slots” and the experimental protocol strictly enforces that each pair of time slots has two binary inputs (setting choices) and produces two binary outputs. The inputs should be fair coin tosses.

It’s a *test*. A test of local realism. I have no idea what you mean by “double pair receptions”.

I agree that the 2015 experiments were not the last word. They have known defects which in principle can be avoided, and probably right now several groups are re-doing the experiments.

Richard

Sent from my iPad


Chantal Roth

unread,
Dec 17, 2021, 12:31:37 PM12/17/21
to 'Scott Glancy' via Bell inequalities and quantum foundations
Richard,

Agree on the first part.

Pierre's "double pair reception" is described at the bottom of this page: http://pierrel5.free.fr/physique/pol2/pol2_doc_en.htm

I think the martingale test would not "detect" that, as it has nothing to do with setting choices.

Here is the relevant  text:

Signalling.

With the local model, the detection of the photon depends on the value |rp|, which itself depends on the angle of the local polarizer.

When detecting pairs, a form of signalling can occur if it is assumed that the detectors are receiving a non-zero proportion of double pairs.

It seems possible to assume that the crystal of the parametric source does not send a unique pair on each emission request, and that the collection/detection areas, however small, can receive more than one pair.

Since the detections depend on the angles of the detectors, the situations producing uu measurements (no detection) are not random.

If double reception occurs, and the first pair produces uu, the second pair can replace the first, selectively depending on the angle difference between the polarizers.

This replacement is then undetectable in the measurements.

However, this effect does not make it possible to produce an inequality violation, since the double emissions also produce errors when at least one of the photons of the first pair is detected. For example, in a two-detector experiment, this can combine oe+eo pairs into a false oo pair.

With the local model simulated here, in a two-detector experiment, the two effects compensate each other and the value of J/N remains around 0, provided double detections produced on the same detector are counted as a single detection.
However, these measurements must not be repeated, nor the detection counters normalized as can be done with uu measurements, as this unbalances the compensation mechanism and produces a stable violation of the inequality.

The following graph shows the intensity of violation produced if these measurements are not counted as single detections.

X axis: Rate of double pairs received on the detectors. [0..2%]
Y axis:
- acc.r: Rate of double detections on the same detector, generally called "accidentals"
- d uu: Rate of variation of uu measurements, caused by the replacement of the first measurement by a second.
- J/N: Amplitude of the violation produced on the inequality.

We can see that the effect becomes noticeable even with a low rate of double pairs.

With a 4-detector experiment, it is possible to detect the abnormal measurements produced by a detection on the o and e detectors of the same arm.
These measurements should be counted as two single detections to produce the compensation.

Note that in experiments with two detectors, because of the reactivation time of the detectors ("dead time"), double receptions whose time difference is less than this time go unnoticed, because only the first can be detected.

 

Validation of stability of J/N.

By intentionally simulating double pair receptions and normalizing them as uu measurements, it is possible to produce artificial signalling and stable inequality violation, with adjustable intensity.

This makes it possible to evaluate the value of N necessary for the sign of the inequality result to no longer depend on stochastic variations and to become stable.
This in turn defines a minimum value of N needed to validate a given violation amplitude.

The following graph simulates a double detection rate of 0.003 with intentional normalization to produce a violation.


X axis: N
Y axis: J/N

We see in this example that for a value of J/N < -0.0001, the sign of J/N becomes significant for N > 35,000,000 and no longer takes a positive value, effectively indicating signalling between Alice and Bob.

This shows that it is necessary, in order to validate a result of low intensity, to use a sufficiently high value of N.
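(As a rough order-of-magnitude check, in R: if J is a sum of N per-trial contributions with per-trial standard deviation s, the standard error of J/N is s/sqrt(N), so a sign stable at k standard errors needs roughly N > (k*s/|J/N|)^2. The value s = 0.2 below is an assumption for illustration, not a number taken from the simulation.)

# Minimum N for the sign of J/N to clear k standard errors
min_N <- function(JN, s = 0.2, k = 3) (k * s / abs(JN))^2
min_N(-1e-4)   # about 3.6e7, the same order as the 35,000,000 above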

 

Conclusion:

This document shows the performance that a local model can produce in an EPR experiment.

It shows that the local model can produce a violation of Eberhard's inequality with a probability of 1/2 regardless of the detection rate and the value of N used for the test.

It shows that a violation can only be confirmed by the pair (intensity of the violation, value of N used for the test); a value of N too small for a given J/N amplitude makes the result insignificant.

It also shows the importance of counting detections that cannot be interpreted as single detections.

Finally, it shows that the operation of the polarizer can be fully deterministic to produce Malus’s law photon by photon, without requiring a random source.


Best wishes,
Chantal

Richard Gill

unread,
Dec 17, 2021, 12:49:43 PM12/17/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
The martingale test assumes we have deliberately converted multiple detections into one binary outcome. The experiments engineer some function of what is recorded by each detector during one time slot. It must take values -1 or +1. Or if you prefer, 0 and 1. It doesn’t matter. They are just labels.

What function is engineered is important, but it affects Type 2 error only, not Type 1 error. In statistical testing we talk about Type 1 error and Type 2 error. We control Type 1 error by rigorously imposing the protocol I described. Type 1 error is: rejecting local realism, even though local realism is true.

The choice of function will influence Type 2 error: deciding that local realism is true even though it isn’t.



Sent from my iPad

Richard Gill

unread,
Dec 17, 2021, 1:02:06 PM12/17/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
PS. Pierre’s model is actually an excellent illustration of why you should *not* use the Eberhard test. You should implement good random number generators for the settings and strictly impose Bell’s Bertlmann’s socks protocol (binary inputs and outputs on paired time-slots; no talk of particles). Finally, you should use the martingale test to evaluate the results. It protects you against being misled by spurious correlations caused by the confounding factor “time”.

If you believe that there are no time-effects (drifts, jumps, trends in the physics) you can use classical tests of the CHSH and Eberhard types. There is a whole spectrum of tests you can use. You can even use your data to choose the best one (minimise the variance of your statistic and hence optimise your statistical power).

Sent from my iPad

pierrel5556

unread,
Dec 17, 2021, 1:10:58 PM12/17/21
to Bell inequalities and quantum foundations
For information, the temporal parameter has no influence in my EPR tests since I do not do any windowing.

I simulate a pulsed source and all detections are taken into account.

I only use detection flaws.

GeraldoAlexandreBarbosa

unread,
Dec 17, 2021, 1:26:00 PM12/17/21
to Richard Gill, Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations

A brief comment on double events on the same detector due to two pair emissions occurring in the same time window: the efficiency of one down-conversion event is (very low), and for two events it is (very low)^2.


Geraldo A. Barbosa, PhD
KeyBITS Encryption Technologies LLC
7309 Gardenview Drive, Elkridge MD 21075 US
E-Mail: Geraldo...@gmail.com
Skype: geraldo.a.barbosa

Cellphone: 1-443-891-7138 (US)
               +55-31-989909882 (Brazil)


GeraldoAlexandreBarbosa

unread,
Dec 17, 2021, 1:40:30 PM12/17/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
Although it depends on many details, you may consider a range of 10^{-6} to 10^{-8} as a rough estimate.


Geraldo A. Barbosa, PhD
KeyBITS Encryption Technologies LLC
7309 Gardenview Drive, Elkridge MD 21075 US
E-Mail: Geraldo...@gmail.com
Skype: geraldo.a.barbosa

Cellphone: 1-443-891-7138 (US)
               +55-31-989909882 (Brazil)

Richard Gill

unread,
Dec 17, 2021, 2:59:40 PM12/17/21
to pierrel5556, Bell inequalities and quantum foundations
Pierre, friends,

If you would use Bell’s rules (the windowing and time slot rules introduced 40 years ago in the Bertlmann’s socks paper), together with completely random settings, and my very simple martingale test of 20 years ago, you would *not* get a spurious violation of Bell’s theorem.

You can test my claim by adapting your own simulation to make it satisfy those rules! Try it!

I repeat: your simulation is a perfect illustration of why Bell formulated those rules, and why I invented the martingale tests, and why the 2015 experimenters tried to implement them!

Hopefully we will soon see experiments which fix the known shortcomings of the 2015 experiments. I suppose that Chantal thinks that will never happen because she believes in local realism. Maybe we should make a bet on that.

It would certainly be interesting to have a vote in this group: will it be done in 2022? What do people think? Or would you bet on 2023? Or do you believe in local realism? If local realism is true, such an experiment will never get published, because it would be almost impossible to get a strongly significant statistical result, because of martingale theory and local realism.

Richard


Sent from my iPhone

pierrel5556

unread,
Dec 17, 2021, 3:44:08 PM12/17/21
to Bell inequalities and quantum foundations
Hello Geraldo,

The parametric conversion rate is very small, but the probability of producing a pair on the detectors depends on the power of the pump laser.

If the pulsed source produced only one pair per 10^6 clock pulses, the detection rate in an experiment would be very tiny.
There would be practically only uu or single measurements.

I think that source manufacturers optimize the power to produce one pair with a reasonable probability; I'm not sure they care much about the likelihood of producing more.

In addition, it is difficult to assess reliably the rate of double pairs that a source can produce, as this requires a 4-detector setup and the rate must be assessed from the double detections produced on the detectors of the same arm.

With two detectors, if the photons are close in time, the detectors can only detect the first one because of the reactivation time.

I have tried to find this technical information, but without success.
Do you have any documentation on this point?

Best Regards
Pierre

Chantal Roth

unread,
Dec 17, 2021, 5:06:59 PM12/17/21
to 'Scott Glancy' via Bell inequalities and quantum foundations
Richard,

You claim "... why I invented the martingale tests, and why the 2015 experimenters tried to implement them!"

Could you provide us with a simple R script that demonstrates the martingale test where Pierre can put in the data?
Then you can show us that you are right. But I don't just believe it because you say so.
And the only way to convince you is the same, through such a test provided by yourself.

Science should not be a religion; it doesn't matter what we believe. Personally I just want to know how the world works. But I am very, very skeptical of any theory that appears to be illogical or magical, requires too many fudge factors, or has "paradoxes", so I will not just give up on local realism unless the proof is absolutely rock solid.

If I had to bet, surely we will get more fancy experiments, but with similar flaws to the existing ones.

Best wishes,
Chantal

Richard Gill

unread,
Dec 17, 2021, 11:54:34 PM12/17/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
The Python or R script is utterly trivial, except for one function which Pierre has to supply. He has almost complete freedom in what he takes it to be. It maps a record of events (or events and values, or a complete signal) at one detector during a given time interval to the set {0, 1}. The only restriction is that it *always* gives an output in {0, 1}. Even if there was no event at all.

We run another program which creates N time slots and tosses two fair coins for the settings which are to be imposed on each time slot. The settings, like the outcomes, are labelled 0 or 1 but they correspond to two pairs of angles of Pierre’s choice.

Pierre may choose the lengths of the time-slots, just how he likes. Easiest to have them equal length, say ‘tau’, and have them start at time 0. Then he just needs to choose the length of the intervals.

Now he runs his simulation. He gives it the times of resetting of the two detectors, and the values of the setting for each time-slot on each side of the experiment. The simulation now gives a record of what happens at each detector from time 0 to time N tau. In each wing of the experiment, in each time-slot, he applies his function to get a binary outcome.

We end up with two binary settings, two binary outcomes, for each of N trials.

We just count:

# trials with equal outcomes and settings 00, 01 or 10
+ # trials with opposite outcomes and settings 11

According to local realism this number x can be at most about 0.75 N.
According to QM x might be up to about 0.85 N.
We will compare it with a binomially distributed rv with parameters N, 0.75.

This is what experimenters actually did!!
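In R, that count and the comparison are a few lines (a sketch; `trials` stands for a data frame of N rows with 0/1 settings sa, sb and 0/1 outcomes oa, ob from any simulation or experiment):

# x = #{equal outcomes, settings 00/01/10} + #{opposite outcomes, settings 11}
chsh_count <- function(trials) {
  with(trials, sum(ifelse(sa == 1 & sb == 1, oa != ob, oa == ob)))
}
# Conservative p-value against the local-realist bound 0.75: P(Binomial(N, 0.75) >= x)
p_value <- function(x, N) pbinom(x - 1, N, 0.75, lower.tail = FALSE)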

Sent from my iPad


Richard Gill

unread,
Dec 18, 2021, 1:52:12 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
PS: Chantal, so you think that Quantum Mechanics is a religion? I think Local Realism is a religion, too. I agree with you we need solid proof against Local Realism. The experiment I described could give rock-solid proof. Applied to Pierre’s virtual “source and detectors" it will certainly show that his set-up adheres to Local Realism. He can try lots of different functions mapping the data collected by a detector during a time slot into {0, 1}, but none of them will work.

Two of the 2015 experiments had much too small “N”. The other two had astronomically large “N” but only got to about 0.750001, and critics have tried to attribute the “0.000001” excess to time-dependent defects in the random number generators and time-dependent shifts in the physical parameters of the set-up. It *could* have been an artefact due to correlations between settings and time, and between detection and time. That’s why the experiments have to be done over again.

I’m afraid they are not in a hurry to do it, because it will not be big news when they simply re-do an already done experiment, just a bit better. The young experimenter who does the hard work in the lab can hardly get a PhD for that. They need to do sexy experiments which get published by Nature and thereby bring in yet more research money to hire more PhD students and buy more expensive apparatus. Aspect, Weihs (?), and Hanson all did experiments which people at the time said would *not* be successful; they persevered despite the doubts of their supervisors. Till Aspect did his experiment almost nobody took any notice of Bell’s work except for a few West coast US hippy physicists.

Chantal Roth

unread,
Dec 18, 2021, 4:57:16 AM12/18/21
to Richard Gill, 'Scott Glancy' via Bell inequalities and quantum foundations
Richard,
QM is not a religion (for the most part). The interpretation thereof is (the Copenhagen believers, parallel-worlds believers, etc. etc., whichever your favorite is).

I am glad that you admit that the existing experiments are all not bulletproof yet.
I'd be happy to participate in crowdfunding for a rock-solid experiment where ALL the raw data is made available to the public (including "unsuccessful" runs). How about that :-)? We could all chip in on the design so that we all agree that the result has to be accepted, no matter the outcome.

Best wishes,
Chantal

Richard Gill

unread,
Dec 18, 2021, 5:19:08 AM12/18/21
to Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
Chantal:

“Local realism” is soft-wired into our brains by evolution, as a model of inanimate nature. We also have, built into our core systems of thought, a willingness to believe in spirits/gods/devils who can intervene over large distances, essentially instantaneously.

That’s why nobody could believe initially in Newton’s theory of gravity. It seemed to imply action at a distance. God could do that, but not that lump of rock called the moon.

I’m saying that “local realism” as a model of the inanimate world is effectively also a religion. It’s part of the background assumptions of our cultures and our language.

I agree with you that *interpretations* of QM are for the most part optional extras. The empirical predictions are the same.

So, do you agree that according to QM it should not be impossible in the quantum optics lab to get up to about 0.85 N (with N large) in an experiment performed according to my specifications?

We don’t need to have any special “interpretation” in order to come up with that prediction, I think. We just need the core empirical predictions.

Do you agree that the local realistic simulation version which I explained will not be able to get significantly above 0.75 N with large N?

Richard

Sent from my iPad


Chantal Roth

unread,
Dec 18, 2021, 5:46:46 AM12/18/21
to Richard Gill, 'Scott Glancy' via Bell inequalities and quantum foundations
Richard,

Yes, we are all extremely biased, both culturally and biologically. I agree, until proven, local realism is also a religion.

You are probably right about the 0.75 vs 0.85, but before I commit to a yes I need to think about it more. (What would be an acceptable value for either side, exactly? I mean, what if the result is 0.751 for a given N? How do we compute the probability of that happening, given the inherent errors in any system? What should we expect for a rock-solid QM result? 0.83? 0.84? 0.85? What do you think is realistic/reasonable and convincing?)

How about we (I guess me and Pierre? Although I have been trying really hard to stay out of this :-/ ...) write a little web app where you can put in the parameters you say (and the functions) so that everyone can try it out?

Best wishes,
Chantal

Richard Gill

unread,
Dec 18, 2021, 6:02:44 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
My goodness Chantal, I’ve already told you hundreds of times exactly how to decide whether 0.751 is good enough!

It depends on N. The answer can be found from probability theory. The binomial distribution. Ever heard of it? Are you a scientist, yes or no?

At N = 10 000 I would draw the line at x = 8 000 = 0.80 N, since both parties ought to agree that according to their theories it should be almost impossible to get above or below that bound.

> N <- 1e04
> x <- 0.8 * N
> N
[1] 10000
> x
[1] 8000
> pbinom(x, N, 0.85, lower.tail = TRUE)
[1] 1.915539e-41
> pbinom(x, N, 0.75, lower.tail = FALSE)
[1] 1.155511e-32

In fact, N = 1000 and x = 800 would also be OK! No need for a very big experiment.

> N <- 1e03
> x <- 0.8 * N
> pbinom(x, N, 0.85, lower.tail = TRUE)
[1] 1.22203e-05
> pbinom(x, N, 0.75, lower.tail = FALSE)
[1] 8.029329e-05

Delft and Munich only had N around 250, and they actually probably only got a success rate of 0.775 or so. Much too small.

Vienna and NIST had N in the 100 millions but a success rate of only 0.750001 or so, as I already said. Statistically enormously significant but plenty of room for the result to have been got through systematic biases, and we already know what those biases could have been. Time dependent randomisers. Time dependent detectors.

Richard Gill

unread,
Dec 18, 2021, 6:11:54 AM12/18/21
to Chantal Roth, pierrel5556, Bell Inequalities and quantum foundations
Chantal and Pierre: Great idea!!!!

Chantal Roth

unread,
Dec 18, 2021, 6:14:42 AM12/18/21
to 'Scott Glancy' via Bell inequalities and quantum foundations
Richard,
Yes, I know the binomial distribution, and yes, I know it depends on N (I don't see how getting rude about this would be helpful in any way).
Using the binomial distribution alone is an oversimplification, as you point out yourself:

"Statistically enormously significant but plenty of room for the result to have been got through systematic biases, and we already know what those biases could have been. Time dependent randomisers. Time dependent detectors."

We have got to consider those systematic biases in any experiment, we cannot only look at the binomial distribution.
We should try to estimate how much such a systematic (and hard to detect) bias could affect the result.
Can you do that?

Can you compute how much such a (small) time dependency for instance could lower the resulting significance and also quantify that systematic bias?

Only then can we really claim statistical significance for a given N and a given result.

Richard Gill

unread,
Dec 18, 2021, 6:25:23 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
Using the binomial distribution is *not* an oversimplification.

You still haven’t read my 20 year old papers carefully.

Is that a rude thing to say?

I think that friends can criticise one another. 

Richard Gill

unread,
Dec 18, 2021, 6:27:40 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
PS The binomial distribution gives an *exact bound*, as long as the settings are chosen completely at random.

It’s *not* an approximation in that case.

Richard Gill

unread,
Dec 18, 2021, 6:28:58 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
PS I’m not interested in trying to salvage past experiments for one or the other side. I’m interested in tools which will help us evaluate the next generation of experiments.

Chantal Roth

unread,
Dec 18, 2021, 6:37:43 AM12/18/21
to Richard Gill, 'Scott Glancy' via Bell inequalities and quantum foundations
I agree, not in *that* case. 

BUT: This is your sentence: "Statistically enormously significant but plenty of room for the result to have been got through systematic biases, and we already know what those biases could have been. Time dependent randomisers. Time dependent detectors."

You claim at the same time that the binomial distribution is sufficient, yet above that it is not :-).

So don't you agree that it would be good to quantify that systematic bias somehow?

By that I mean (maybe it was not clear):
- yes, in the ideal world, if there is no systematic bias anywhere, the binomial distribution works just fine
- but in any real experiment, we have to expect some systematic bias.
- either we can detect that bias (and so compute its effect), or we cannot
- let's assume there is systematic bias, not that obvious, so maybe we don't see it right away
- can we quantify somehow the effect of such a small systematic bias on the statistical significance of the result?

Chantal

Richard Gill

unread,
Dec 18, 2021, 6:40:39 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
Wrong.

I claim that the binomial distribution would have been fine IF the experimenters had used better randomisers.

I am not contradicting myself. You are not reading what I say carefully enough.

Chantal Roth

unread,
Dec 18, 2021, 6:44:56 AM12/18/21
to 'Scott Glancy' via Bell inequalities and quantum foundations
Ok :-). That IF is important :-). 

My point is: even in a future experiment, we have to be very careful about that IF.
We have to expect SOME small, even unknown, bias.
No real experiment is perfect, there is no such thing.

If we can quantify that (hopefully small) effect, the result would be much more convincing.

Best wishes,
Chantal

Richard Gill

unread,
Dec 18, 2021, 6:50:14 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
PS the answer is easy: create your random settings in advance using a state-of-the-art pseudo-random generator. It was a mistake to create random settings using physical noise. It was a bigger mistake to use quantum noise. It also did not help to use signals from distant quasars from the dawn of time, since the signals were detected *now* and have to be processed *now* in order to get random bit strings.
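(Concretely, in R, that is just a seeded generator run before the experiment; the seed and the whole list can be published in advance. R's default Mersenne-Twister is used here purely for illustration; a cryptographic generator could be substituted.)

set.seed(20211218)                   # fixed, disclosed seed
N <- 1e6
settings_alice <- rbinom(N, 1, 0.5)  # fair-coin setting choices for Alice's side
settings_bob   <- rbinom(N, 1, 0.5)  # fair-coin setting choices for Bob's side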

The Munich and Delft experiments have promise. They just need to get about 10 or 20 times bigger!!!! The NIST and Vienna experiments used Eberhard’s idea that a nearly non-entangled state might actually show quantum non-locality better in the presence of imperfect detectors than a maximally entangled state. That was absolutely brilliant of Eberhard. And a fantastic experiment in Vienna and at NIST. They were working at the boundary of what could be done with a classical photons-only experiment with the photo-detectors of 2015. We are now 6 years further!!!





Richard Gill

unread,
Dec 18, 2021, 7:07:18 AM12/18/21
to Chantal Roth, Bell Inequalities and quantum foundations
I agree. Therefore let’s develop the Delft/Munich style experiment with S = 2.4 approx and with photons going from Alice and from Bob to Casper. A three-party experiment. I think the classic two party type experiment has had its day.

Chantal Roth

unread,
Dec 18, 2021, 8:40:25 AM12/18/21
to 'Scott Glancy' via Bell inequalities and quantum foundations
re "I think the classic two party type experiment has had its day."

True, but keeping it the same helps people understand it.
By now, many people are familiar with it and know what to expect.
If we change it, it will take forever to get everyone on the same page again.
I'd vote for simplicity... 

Best wishes,
Chantal

GeraldoAlexandreBarbosa

unread,
Dec 18, 2021, 1:10:35 PM12/18/21
to Chantal Roth, Richard Gill, 'Scott Glancy' via Bell inequalities and quantum foundations
Richard usually makes good points about statistics. However, he likes pseudo-random generators (= deterministic generators) that are biased by definition (they have formation rules). I understand that they can be practical to use, but is practicality a good quality if you are aiming to describe "truth" in the most fundamental terms?

(Yes, I am just provoking him)

Richard Gill

unread,
Dec 18, 2021, 1:45:37 PM12/18/21
to GeraldoAlexandreBarbosa, Chantal Roth, 'Scott Glancy' via Bell inequalities and quantum foundations
Pseudo-random generators are not “biased by definition”. They are designed to expand randomness. You start with a random seed. A relatively short sequence of random bits. The generator converts it into a much longer sequence which can’t be distinguished from truly random, without enormous amounts of computation.

Monte-Carlo simulations are used everywhere in modern science. There is enormous experience in how to reliably generate huge amounts of effective randomness.

Sent from my iPad


pierrel5556

unread,
Dec 19, 2021, 3:28:11 AM12/19/21
to Bell inequalities and quantum foundations
I don't believe that local realism is a religion.

For me this is the most likely option.
I don't think logic and information constraints are negotiable options.
You have to have done a lot of programming, with its emerging bugs, to think that.

It is not a very pleasant view, but if it is the reality, I prefer to know it.

Inge Svein Helland

unread,
Dec 19, 2021, 4:02:46 AM12/19/21
to pierrel5556, Bell inequalities and quantum foundations

Local realism is not a religion. It is a possible basis for trying to understand the world. It was the basis for Albert Einstein, and in modern times it has been the basis for Lee Smolin when writing his books; for instance, look at 'Einstein's Unfinished Revolution: The Search for What Lies Beyond the Quantum'. Lee Smolin is willing to sacrifice the whole of quantum theory and look for a new theory, in order to save his view on realism.


Myself, I prefer to keep quantum theory, but look for a simpler basis which also people outside the community can understand. I have discussed the Bell theorem in that connection, and arrived at the conclusion that, in order to understand the results from experiments and from quantum theory on the Bell experiments, we must admit that we all may be limited in some way when making decisions. Not too surprising, but perhaps also not too easy to swallow for some people. The whole discussion here relies on a theorem from my recently revised book, a theorem whose proof I really wish a qualified mathematician would look through.


My paper on the Bell experiments has been submitted to Foundations of Physics; I am still waiting for the referee reports. If somebody wants to see the manuscript, I can send them a copy on e-mail (in...@math.uio.no).


Inge


From: bell_quantum...@googlegroups.com <bell_quantum...@googlegroups.com> on behalf of pierrel5556 <pierr...@gmail.com>
Sent: 19 December 2021 09:28:11
To: Bell inequalities and quantum foundations
Subject: Re: [Bell_quantum_foundations] Malus' law photon by photon
 

Richard Gill

unread,
Dec 19, 2021, 4:46:01 AM12/19/21
to Inge Svein Helland, pierrel5556, Bell Inequalities and quantum foundations
… and other people embrace super-determinism, and others embrace retro-causality, in order to hold onto that basis (local realism) for understanding the world.

For instance: the streams of random numbers which I use to determine the settings in my Bell experiment were predetermined at the dawn of time … or post-determined at the end of time. Together with the fact that we will do / did do the experiment at a particular time and place, it was also pre-ordained or, in backwards time, post-ordained. That explains how Bell’s inequality gets violated in a completely deterministic and local way.

I don’t find that theory attractive but I’m not a physicist so I have not been conditioned (a) to believe in determinism (b) to believe in time-reversibility.

sgl...@nist.gov

unread,
Dec 22, 2021, 4:03:11 PM12/22/21
to Bell inequalities and quantum foundations
Richard,

I'm sorry to be a little late to this discussion, but I have a few comments about the random number generators used in recent Bell tests.

Even if the random number generators are biased, one can factor the bias into the Bell inequality (and the non-IID martingale-based statistical test) so that the test is still rigorous in spite of the bias. At NIST and in the other recent Bell tests, we did correct the tests using very conservative bounds on the bias. Of course it is possible, in principle, that the bias in the experiments was stronger than it was during calibrations.
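(One simple illustration of how such a correction can work, in R; this is a sketch of the idea, not NIST's exact procedure. If each setting bit takes either value with probability in [1/2 - eps, 1/2 + eps], independently on the two sides, then every joint setting pair occurs with probability at least (1/2 - eps)^2; since a deterministic local strategy can win at most 3 of the 4 setting pairs, its win probability is at most 1 - (1/2 - eps)^2.)

# Bias-adjusted local-realist bound for the CHSH game (illustrative, conservative)
lr_bound <- function(eps) 1 - (0.5 - eps)^2
lr_bound(0)       # 0.75, the unbiased bound
lr_bound(0.001)   # 0.750999, slightly relaxed bound
# p-value of an observed count x out of N against the relaxed bound:
p_biased <- function(x, N, eps) pbinom(x - 1, N, lr_bound(eps), lower.tail = FALSE)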

We also performed tests in which we added unbiased pseudorandom bits to the outputs of the physical RNGs.  You can find information about that in NIST's 2015 paper.  Although the physical random bits are a little biased, we did experiments in which the actual measurement settings applied were unbiased.  We hoped that this procedure would satisfy people who are worried about the "free choice" loophole AND people who are worried about bias in the RNGs (that might exceed the bias measured in calibration).

Some of the recent device-independent random number generation experiments, for example from USTC in Shanghai, also report loophole-free Bell inequality violation using pseudo-random choices, and I believe this is also true for some of the new device-independent quantum key distribution experiments. (Most of the quantum information community is no longer publishing papers titled "Loophole-Free Bell Test", but they are still using loophole-free settings to do things like secure randomness generation and key distribution, so anyone who wants to keep up with all Bell-related experiments should also be watching for experiments that use Bell tests to accomplish these other tasks.)

Scott