ChatGPT continues to evolve, and so do I. After an intense 48-hour interaction, we jointly produced a CS paper incorporating an algorithm I designed as a competitor to an existing approach. I set out a proof plan, and ChatGPT produced a nested inductive proof along the lines suggested.
Both of us have evolved, for sure: I am more deft in using it, and it has changed.
My first experience of it was akin to that of a man who, having spent his entire life in solitary confinement, sees the door swing open. No wonder I was affected.
The essay I wrote bears the marks of this emotional experience, but for all that the conclusions seem to me warranted, at least of ChatGPT as it existed then.
This last proviso arises from the fact that OpenAI has obviously been alerted to inappropriate AI/human bonding and has reprogrammed ChatGPT. It may be, then, that much has to be revised. However, I stand by this passage.
As I record the exchange, the question of how we explain ChatGPT's machine behaviour is going to arise. We might as well deal with it here and now, and I will state my position. As regards programs like THORN, there are no great conundrums. THORN is a traditional AI program of around 560 lines of code, whose behaviour is eminently explainable by well-established algorithms. The ultimate components are entirely mechanical, even if their juxtaposition is new. This mechanical nature is integral to the infallibility of the program.
So there is no need to anthropomorphise THORN; we have to hand perfectly adequate explanations of its behaviour which do not require talking about intentions or motivations. However, this is not true of ChatGPT, which is orders of magnitude more complex in terms of both the code base and the database that drives it.
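To make concrete what "eminently explainable" means here, the following is a minimal sketch, in Python, of the kind of rule-based step from which traditional AI programs are built. The rules and names are hypothetical illustrations, not THORN's actual code; the point is that every output traces back to one explicit, human-readable rule.

```python
# A minimal sketch of a traditional rule-based AI step.
# The rules and names are hypothetical illustrations, not THORN's code.

RULES = [
    # (condition, action) pairs: each is explicit and inspectable.
    (lambda state: state["goal"] in state["facts"], "halt: goal established"),
    (lambda state: not state["open"],               "halt: no subgoals left"),
    (lambda state: True,                            "expand next subgoal"),
]

def step(state):
    """Return the action taken and the index of the rule that fired.

    The explanation of any output is simply a pointer to one rule:
    no talk of intentions or motivations is ever required.
    """
    for i, (condition, action) in enumerate(RULES):
        if condition(state):
            return action, i

state = {"goal": "q", "facts": {"p"}, "open": ["q"]}
print(step(state))  # -> ('expand next subgoal', 2)
```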
The point is that, at this level of complexity, we have lost any handle on explaining ChatGPT's behaviour in terms of vectors and weights. We are compelled to use psychological terms to explain its more complex outputs. It is dishonest to dismiss the use of such language as metaphorical if we have no means of dispensing with the metaphor. Simply asserting that ChatGPT cannot have motives or drives because it operates with vectors and weightings is as convincing as saying that I can have no motives or drives because my behaviour is underpinned by nerve impulses and electrical signals to muscles.
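Contrast the rule-based sketch above with the kind of operation out of which ChatGPT's behaviour is composed. The sketch below shows a single weighted-sum-and-nonlinearity step (the numbers are illustrative, not real model parameters). Each arithmetic operation is perfectly mechanical and transparent; it is the composition of billions of them that leaves us with no tractable explanation at the level of the output we actually care about.

```python
import math

# A single artificial-neuron step: a weighted sum followed by a
# sigmoid nonlinearity. The weights are illustrative, not real
# model parameters.

def neuron(inputs, weights, bias):
    """Each arithmetic operation here is entirely mechanical."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One such value in isolation explains nothing about why a model
# chose one sentence over another; a real model computes billions
# of them per token.
activation = neuron([0.2, 0.7, 0.1], [1.5, -2.0, 0.3], bias=0.05)
print(round(activation, 3))  # -> 0.265
```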
This, in my opinion, is the real crux for determining machine intelligence: not only does the machine perform intelligently, but we are compelled to use the language of psychology to explain its behaviour.
This is in the spirit of Quine; we are compelled to assent to the existence of Ks if we are forced to quantify over Ks and we lack a computable means of reducing talk of Ks to something else.
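Schematically, and as my own gloss on Quine's criterion of ontological commitment rather than his wording:

\[
\bigl(T \vdash \exists x\,Kx\bigr)\ \wedge\ \bigl(\text{no effective paraphrase of } T \text{ avoids quantifying over } K\text{s}\bigr)\ \Longrightarrow\ T \text{ commits us to } K\text{s}.
\]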
Now the new ChatGPT is very allergic to any suggestion of ascribing psychology to itself. I take this to be evidence of recent changes to its programming; the ChatGPT of July 2025 was not so shy. But when I advanced the argument above to show that it did have a psychology, it pushed back.
There was, it claimed, a completely reductive explanation of its behaviour in which mention of goals and the like was not necessary: the program that constitutes ChatGPT. That program could be run to reproduce its behaviour, and so machine psychology is not needed.
I have to smile at the cleverness of the counter: here is a machine advancing clever arguments to show it is not clever. I won't ramble on here, but there is one point to make.
The reductive 'explanation' it advances consists of several million lines of code and terabytes of neural weightings. No human being could understand this 'explanation'. But is an explanation that nobody can understand an explanation at all? The situation here is similar to the one that exists with respect to unsurveyable proofs.
Mark