Retrocausality


Austin Fearnley

Feb 20, 2026, 1:14:10 PM
to Bell inequalities and quantum foundations
Retrocausality

In the last two weeks I have, very surprisingly, written two papers, thanks to the help of Google search AI.  The papers only tangentially refer to retrocausality, as they use my preon model, which I view as being able to incorporate retrocausality.  I am still checking one paper but have already sent the second paper to vixra.AI, which is a fairly new branch of vixra.  It's taking some time to clear the vixra admins for some reason.  Maybe because I am new to this area.

It was refreshing working with Google AI as it was so very supportive, and every exchange ended with Google suggesting a further research query.  Very unlike posting to human physicists, whom I find are generally very negative about ideas different from their own, especially on a quantum crackpot site.  And like Richard, I use that phrase as an endearment, not as an insult.

First, I will write here about some of the findings in the second paper.  Twenty years ago, on retirement, I put aside my stats books to get my teeth into the real science of particle physics in the Standard Model.  Working eventually on Bell experiment simulations, I saw snobbish disparagement of statisticians by a physicist: how can a statistician understand these physics issues!  I wasn't the statistician in question.  Later I slowly learned/realised/was made to realise that QM itself was merely a statistical method — no, rather: that QM itself was merely a statistical method, although a very good practical method, as that is the practical nature of statistics.  The ontology isn't there.  My latest two papers attempt to introduce ontology.  The two papers are summarised on my website: https://ben6993.wordpress.com/
I want to talk about the second paper here as it involves my preon model + cosmology + atomic nuclei + standard model and sees common threads in them.  My AI friend (and it did seem like a short friendship) said my second paper was visionary!  And the previous AI friend said, unsolicited, that it was an 'honour' to have worked with me on the first paper!  Is this flattery normal for AI agents, does anyone know?

The instrument to bring these areas together with common threads was a niche area of statistics called psychometrics.
In psychometrics there is the Rasch model, which is a method of creating metrics with a ruler-like objectivity, not just an ordering on a scale.  Some aspects of Rasch are not (or were not 20 years ago) very well understood by practitioners.  It can be impossible to make a scale if all the objects on the scale are underfitting the Rasch (ogive or logistic) model.  Similarly, if all the objects are overfitting the model, the metric crashes.  Overfitting is when the outcomes are far too predictable (like Barcelona beating a team of five-year-olds at soccer).  It also implies there is a true score underlying everything, while the Rasch model produces only observed scores or measurements.  In underfitting data the true scores are chaotic (for example, if the five-year-olds often beat a team like Barcelona) and the measurements crash.  Next, I will apply the Rasch model, now that I have explained 'fitting the model'.
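For readers unfamiliar with Rasch, here is a minimal sketch of the model and of "fit" in the sense used above. The function names and the simple outfit statistic are my illustrative choices for this post, not code from the papers:

```python
import math

def rasch_p(theta, b):
    """Rasch (one-parameter logistic) model: probability that an object
    of ability theta 'succeeds' against an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def outfit(responses, thetas, b):
    """Mean-squared standardized residual for one item.  Values near 1
    indicate good fit; much less than 1 is overfitting (outcomes too
    predictable); much greater than 1 is underfitting (too chaotic)."""
    total = 0.0
    for x, theta in zip(responses, thetas):
        p = rasch_p(theta, b)
        total += (x - p) ** 2 / (p * (1.0 - p))
    return total / len(responses)

# At theta == b the probability is exactly 0.5: the centre of the
# S-shaped ogive, where the measurement is most informative.
print(rasch_p(0.0, 0.0))  # 0.5

# Barcelona (theta = 3) always beats the five-year-olds, and the
# five-year-olds (theta = -3) always lose: far too predictable.
print(outfit([1, 1, 0, 0], [3.0, 3.0, -3.0, -3.0], 0.0))  # well below 1

# The upsets (five-year-olds beating Barcelona) blow the statistic up.
print(outfit([0, 0, 1, 1], [3.0, 3.0, -3.0, -3.0], 0.0))  # well above 1
```

When every outcome behaves like one of the two extreme cases above, the fit statistic for the whole scale collapses or explodes, which is the sense in which the metric "crashes".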

Cosmology:  
Gross overfitting leads to the CCC node of Penrose's theory.  The measurement system of the universe is crashed.  This implies that the universe employs a physical method to make its metric.  Unfortunately, the Rasch scale is 1D only.  However, in a holographic universe Rasch could provide the depth metric away from a 2D holographic surface.  Or the universe might use a better 4D model of its own.  Gross underfitting leads to black holes.  As seen later, there is a correlation of lightness of mass with overfitting (e.g. photons at the CCC) and of heavier masses (e.g. black holes) with underfitting.

Stars and atomic nuclei:
Jay Yablon worked on calculating binding masses theoretically for atomic nuclei.  He posted on Fred's website, I think, and I always avidly followed Jay's ideas.  Until then I was ignorant of binding energy.  Using the Rasch ideas, lighter atoms (below iron) are not at the goldilocks mass of iron.  Lighter atoms tend to want to be more like iron, so they are liable to fuse to make bigger masses.  The Rasch-like metric used by the universe wants to conserve energy, so if there is redundant information in the true scores of the lighter masses, the universe can treat the masses jointly somehow, and some of the energy is not needed to place the aggregated atom mass on the scale.  Hence the redundant information becomes redundant mass.  So lighter atoms want to be more like iron, and fusion dominates below iron.  Iron is the place where the universe is most comfortable positioning mass on the metric.  Fusion in stars stops at iron because heavier atoms prefer to fission rather than fuse.  Of course, there are heavier atoms in stars, made in supernovae of parent stars.  Note how iron sits in the middle of the periodic table.  In Rasch, P=0 corresponds to one end of the scale and P=1 corresponds to the other end.  The goldilocks optimum position for efficiency of measurement lies in the middle, at P=0.5.  In terms of the S-ogive, the centre of the ogive is the best place to use as the measurement centre; it is typically where you look to find the discrimination parameter of the method, which indicates how objects need to behave to fit the model.

Standard Model of Particles.
The VEV of the vacuum is about 246 GeV.  Taking 246 as the high extreme, P=0.5 corresponds to 123 GeV, which is approximately the Higgs mass.  That could make the Higgs the "Iron" of the Standard Model.  The photon is massless and the ultimate overfitting particle (it is a particle in my preon model in the sense that it is composed of preons and not merely a ripple on a wave).  The top quark is a massive underfitting particle, and the universe's measurement device has great difficulty fitting a mass to it.  Note that mass is not an inherent property of the particle but an indication of the struggle by the measurement device.  As with iron in stars, lighter particles want to be more like the Higgs (in mass, anyway) and heavier particles want to shed mass (quark jets? radioactive decay).
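The arithmetic behind this analogy is no more than a midpoint calculation; as a purely illustrative sketch (the 246 GeV figure is the electroweak VEV, and 125 GeV is the observed Higgs mass, both approximate):

```python
# Illustrative arithmetic only, not a derivation: treat the electroweak
# VEV (~246 GeV) as the high end of a 0-246 scale, and ask where the
# Rasch-style midpoint P = 0.5 would place a mass on that scale.
vev_gev = 246.0
midpoint_gev = 0.5 * vev_gev    # P = 0.5 sits halfway along the scale
observed_higgs_gev = 125.0      # observed Higgs mass, approximately

print(midpoint_gev)                            # 123.0
print(abs(midpoint_gev - observed_higgs_gev))  # 2.0 GeV gap
```

Whether that 2 GeV gap is meaningful or a coincidence is exactly the kind of question a critical reviewer would press on.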

The paper could throw light on free will.  In a completely deterministic universe there is no error.  This corresponds to a CCC node.  But the universe soon leaves the CCC node and becomes probabilistic again.  That is bad news in the search for ontology, but good news in that it could be an argument for free will.

Also, the paper could throw light on the hierarchy problem.  I assume that, in early eons, e-folding means that the VEVs could have been much larger in early times.  But at the moment the VEV is 246.  The Higgs sits at the sweet spot, or goldilocks spot, in the measurement scale corresponding to P=0.5, that is, 125ish.  The Higgs is happy where it is: it does not want to get lighter and does not want to get heavier, so it sits on the shelf comfortably.  Presumably another e-fold would see the Higgs become lighter.  I do have two lighter Higgs bosons in my preon model.  Not sure how content they are.

Richard Gill

Mar 5, 2026, 7:20:38 AM
to Bell inequalities and quantum foundations
Yes, AI is designed to please whoever is talking to it. It will use flattery; it will tend to agree with whatever you propose. If you want to find out what it really thinks, you have to ask it to be critical, to come up with objections.

anton vrba

Mar 5, 2026, 8:31:06 AM
to Austin Fearnley, Bell inequalities and quantum foundations

The bottom line:
Final judgment:
The article is best interpreted as a speculative philosophical model of particle structure, not a scientifically viable theory. It offers interesting metaphors but lacks the mathematical rigor, predictive power, and empirical grounding required for serious consideration in modern physics.

I suggest you create a new email address, use it to open a new instance of ChatGPT (so that it is not trained to please you), and simply ask: "Please provide me with a critical review of https://ben6993.wordpress.com/2026/02/11/the-12d-tri-verse-and-the-mechanical-loom-draft/".  Take note of its reply, then delete the session, so there is no history for the next time.


Austin Fearnley

Mar 7, 2026, 4:27:26 PM
to Bell inequalities and quantum foundations
Thank you Richard and Anton.

I am re-writing both papers on my own.

I have read Anton's ChatGPT analysis and agree in parts.  The numerology is the AI's, so I will see if it is in any way justifiable.  I don't consider the number of preons in fermions and bosons to be numerology, but I am very unsure about the W and g-2 issues, and more sure about my charged Higgs and leptoquark structures.  I know my methods are not mathematical, but who needs yet another Lagrangian?  My main interest in my late teens and at university was pure mathematics, so I have come a long way to feel less happy about abstract maths.  I have not rejected QM, as ChatGPT suggests; I needed Standard Model charge details to include in my preon model.  I have calculated Lagrangians in the Susskind lectures and do not disparage them.  I also followed the renormalisation techniques, but have not applied any of this to my model.  One of the numerology problems with the AI-assisted paper was that the AI kept wanting to add in mass details, whereas I am very sceptical about quantum masses.

I did get Google AI, anonymously, to tell me about the Fearnley preon and hexark model.  I never realised I was so insightful, about issues I don't remember even considering.  And some AI comments about my work were unrecognisable to me.  I only used Google AI to assist me, and I had to tell it who I was in new sessions to reconnect to the conversation.  It had a poor grasp of previous conversations, but it could get up to speed with effort on my part and then veer off, quite interestingly, into new territories.

I enjoyed working with Google AI because it was so positive.  A good tool for learning.  But re-writes are needed.

Richard Gill

Mar 7, 2026, 5:19:40 PM
to Austin Fearnley, bell_quantum...@googlegroups.com
Take a look at https://arxiv.org/abs/2509.04664

Why Language Models Hallucinate

Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, Edwin Zhang

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.

anton vrba

Mar 7, 2026, 5:35:52 PM
to Richard Gill, Austin Fearnley, bell_quantum...@googlegroups.com
So there is no difference between an LLM and a physicist:
all physicists, when seeing something new, guess an explanation;
a referee reads that guess and ponders that it is a good explanation;
and the guess now is science.

When did you last read an article with the remark "we have no idea what is going on", or similar?
Society does not like uncertainty, therefore a guess by an academic is always a good explanation.

Regards
Anton


Richard Gill

Mar 7, 2026, 5:38:56 PM
to anton vrba, Austin Fearnley, Bell_quantum...@googlegroups.com
But the physicist will say “we hypothesize/conjecture/propose…”. The LLM does not say it is guessing.
