Yup. The thing is, you can get good results out of it, but the more
you know going in, the better results you can expect coming out. Think of it this way: ChatGPT may have "read" a thousand articles on ionospheric phenomena and be able to make some logical inferences from causes to effects to how they would appear on a plot. But it has never seen a GRAPE plot before, and it doesn't immediately know how to read one. Without prompting, it's not likely to make the connection that a sudden jump in magnitude and a lowering of hmF2 happen at sunrise, and without prompting it doesn't know *when* the sun rises over the path between you and Fort Collins anyway. The context it does have is that you told it some kind of geomagnetic event was happening, so it interprets everything in light of that. And we all know these things have a bias towards overconfidence.
So for best results: 1) give it plenty of context, 2) question its results whenever they seem fishy or insufficiently justified, and 3) ask it in advance to explain its reasoning step by step. Just like a human, when it has to explain itself it's more likely to catch an error than to dig itself in further.
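
Something like this, just as a rough sketch (the sunrise time below is a placeholder, not from any real run): "Attached is a 24-hour plot from my GRAPE receiver on the path from my station to Fort Collins. Sunrise over the midpoint of the path was around 1230 UTC, and a geomagnetic event was reported that day. Before you draw any conclusions, walk through your reasoning step by step, and tell me which features you can actually see in the plot versus which you're inferring from the event." Putting the sunrise time in up front and asking for the reasoning in advance is what keeps it from hanging everything on the geomagnetic event.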