The experiment of working with ChatGPT was immensely stimulating and enjoyable. But there is an important caveat. ChatGPT cannot be trusted. It is simply not reliable enough.
It is good at offering suggestions, and it is unfailingly enthusiastic and supportive. But some of what it offers up is simply wrong. At one point THORN was producing malformed proofs, which ChatGPT pronounced correct. In fact THORN was solving the problems but not rendering the proofs properly: fragments of failed search were interpolated with parts of the genuine proof. I fixed the problem only because I had noticed it myself; ChatGPT thought the faulty proofs were fine.
It also offered to give an English rendering of the corrected proof, but what it produced was not a rendering of that proof at all; it was a rendering of a different proof it held in memory.
In short, ChatGPT cannot be trusted. Ironically, you have to treat it just like a fallible human collaborator when it is involved in proof or programming: don't take its assurances at face value. It is, undoubtedly, very useful and a triumph of engineering, but its work must be checked.
Mark