That's what they refer to as an AI "hallucination."
I once tested ChatGPT's logical reasoning. In short, I had stated "A is true," but it somehow got that wrong and carried on for several messages assuming "A is false." Once corrected, it did get the logic right, but correcting it was even more cumbersome than telling a human "No, I meant that it did NOT happen, not that it DID."
By then, ChatGPT had built up a bunch of other assumptions and conclusions on top of that error, and it basically had to start over without the prior conversation.
I'll note that it helps me a lot with understanding and writing JavaScript, for example, but it definitely writes code that will fail in the exact context where I intend to use it, imagining packages or commands that don't fit the real situation.