I’d like to share a set of results from a 1,000-spectrum dataset collected with my 2.1-meter radio telescope and ask for peer review from SARA members.
Because ChatGPT has a 10-file upload limit, I had to combine the spectra into zipped files so the full dataset could be processed with the Cosmic GHz Decoder. I used ChatGPT only to generate the graphs, reports, and GIFs from the spectra. From that work, I produced 1-page, 5-page, and 10-page reports.
I’m posting these results for review because I want to know whether any of the interpretations may be incorrect or whether any conclusions could be AI hallucinations.
This is not meant to replace ezRA, which is an excellent and well-established program. I see AI as an additional tool that can help organize results and explain in simpler language what the data may be showing.
I’d appreciate any comments, corrections, or concerns, especially if you notice anything questionable in the plots, reports, or interpretations.
Pablo Lewin WA6RSV
Pablo, I am being very careful these days. I started with an HP calculator in the 1970s. I use ezRA now, which uses well-established Python libraries for radio astronomy.
On Sunday, March 29, 2026 at 12:21:06 AM UTC-6 Pablo Lewin wrote:
On Saturday, March 28, 2026 at 11:19:25 PM UTC-7 Pablo Lewin wrote:
--
You received this message because you are subscribed to the Google
Groups "Society of Amateur Radio Astronomers" group.
To post to this group, send email to sara...@googlegroups.com
To unsubscribe from this group, send email to
sara-list-...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/sara-list?hl=en
To view this discussion visit https://groups.google.com/d/msgid/sara-list/2e9c5501-9e65-4b6d-a2a0-3bb115f9f34fn%40googlegroups.com.
Hi Eduard,
Thanks — that is helpful feedback.
You are right that the core result is really the H I drift-scan itself. Most of the additional plots were just different ways of looking at the same dataset, and some are more interpretive than others.
The ones I would consider the primary data products are:
The more model-dependent plots were:
I would treat those as exploratory / interpretive figures, not as primary validated results.
The more observationally grounded diagnostic plots were:
So in short: the solid part is the repeatable H I drift scan itself; the rest is mostly derived visualization and interpretation layered on top of it.
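For anyone who wants to reproduce that core drift-scan result, the frequency-to-velocity conversion underlying an H I spectrum is simple to state. Here is a minimal illustrative sketch (my own example, not the exact code used in this work), assuming the radio Doppler convention and no correction to the Local Standard of Rest:

```python
C_KM_S = 299792.458        # speed of light, km/s
F_HI_MHZ = 1420.405751     # rest frequency of the 21 cm H I line, MHz

def hi_velocity_kms(f_obs_mhz):
    """Radio-convention Doppler velocity (km/s) for an observed H I frequency.

    Positive values mean the gas is receding (the line is shifted
    below 1420.406 MHz).
    """
    return C_KM_S * (F_HI_MHZ - f_obs_mhz) / F_HI_MHZ
```

For example, a feature observed at 1419.9 MHz corresponds to roughly +107 km/s before any LSR correction, which is the kind of velocity axis the LAB survey comparison would be done on.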
On the LLM question: it was not a case of “upload the data and hope for the best,” but neither was it a rigid, pre-written pipeline from the start.
The actual workflow was closer to this:
So I would describe it as an assisted exploratory reduction, not a blind black-box reduction, but also not yet a formally validated pipeline result.
I completely agree that the right next step is to validate the spectra against the LAB H I survey and run the same data through an existing documented pipeline such as ezRA. That would be the clean way to separate:
So my own summary would be:
That is probably the fairest characterization of what was done.
Best,
Pablo
Hi Eduard,
Yes, I think that is basically right.
What I’m trying to explore is whether artificial intelligence can make it easier for amateur radio astronomers not only to reduce data, but also to explain what the data actually mean in physical terms.
For me, that second part is just as important as the plotting. A concrete example is dedispersion. Before feeding this material into AI, I did not really understand that pulses do not arrive at all radio frequencies at the same time, or that the interstellar medium acts like a thin plasma so lower radio frequencies arrive later than higher ones, with a delay that scales roughly as frequency to the power of minus two. That is something AI was able to explain to me clearly as a non-professional citizen scientist, and that kind of guided learning is a big part of why I’m interested in this.
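To put a number on that frequency-to-the-minus-two scaling, here is a minimal sketch of the cold-plasma dispersion delay (my own illustrative example, not part of the original analysis), using the standard dispersion constant of about 4.15 ms GHz² per pc cm⁻³:

```python
K_DM = 4.15  # dispersion constant, ms * GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay (ms) at f_lo_ghz relative to f_hi_ghz.

    dm is the dispersion measure in pc cm^-3; lower frequencies
    arrive later, with the delay scaling as f**-2.
    """
    return K_DM * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)
```

For a dispersion measure of 50 pc cm⁻³, a pulse at 1.4 GHz lags one at 1.6 GHz by roughly 25 ms, which is exactly the effect dedispersion corrects for.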
So the goal is not “trust the black box and accept whatever comes out.” The goal is to see whether AI can become a useful assistant for amateurs in two ways:
In this case, yes, the workflow involved AI-assisted generation of analysis steps and code, which was then used to inspect and visualize the data. So it is not purely manual, but it is also not just “upload data and hope for the best.” I’m trying to make the process more understandable and accessible, especially for people who are not professional astronomers or programmers.
I also think your suggestion about repeatability has a lot of merit. Re-running the same prompts, or changing prompts in ways that should not affect the physics, is exactly the kind of test that would help show whether the method is robust or whether it is behaving too much like a probabilistic black box. That seems like an important next step.
Ultimately, I don’t see AI as a replacement for standard validation against things like LAB or a documented pipeline such as ezRA. I see it more as a potentially invaluable tool for helping citizen scientists reduce data, learn the concepts behind the analysis, and gradually get better at deciphering what the cosmos is telling us.
That is really why I’m working on this.
Best,
Pablo
Science prompts I recommended / next-step suggestions:
Hi Eduard,
That is a fair caution, and I agree with the general point.
But from the perspective of a non-scientist or citizen scientist, there is also a practical problem: if we are not already experts, how are we supposed to know what AI is capable of, what it is not capable of, and even what kinds of things are available to try?
That is part of what I am trying to learn by doing.
I am not claiming that open-ended prompts are the ideal scientific method. I understand the risk that an LLM can send someone down a nonsensical rabbit hole or produce something that sounds convincing without being well grounded. That is exactly why I think this needs to be explored openly and critically rather than either blindly trusted or dismissed outright.
For non-scientists, AI can still have real value as a starting point:
The danger is obvious, but so is the opportunity.
In other words, I do not see this as “vibe astronomy instead of science.” I see it as a learning and assistance tool that still has to be checked against real data, known pipelines, surveys, and expert feedback. For people outside the professional community, that guidance function may be one of the most important uses of AI.
So I think your warning is valid, but I also think there has to be room for non-scientists to experiment with these tools in order to understand both their usefulness and their limits. Otherwise we are being told to be cautious about capabilities we have never had the chance to test or even understand.
That is really the spirit in which I am approaching this.
Best,
Pablo