Artificial Intelligence products from my 2.1 meter Dish


Pablo Lewin

Mar 29, 2026, 2:17:28 AM
to Society of Amateur Radio Astronomers

I’d like to share a set of results from a 1,000-spectrum dataset collected with my 2.1 meter radio telescope and ask for peer review from SARA members.

Because ChatGPT has a 10-file upload limit, I had to combine the spectra into zipped files so the full dataset could be processed with the Cosmic GHz Decoder. I used ChatGPT only to generate the graphs, reports, and GIFs from the spectra. From that work, I produced 1-page, 5-page, and 10-page reports.

I’m posting these results for review because I want to know whether any of the interpretations may be incorrect or whether any conclusions could be AI hallucinations.

This is not meant to replace EZRA, which is an excellent and well-established program. I see AI as an additional tool that can help organize results and explain in simpler language what the data may be showing.

I’d appreciate any comments, corrections, or concerns, especially if you notice anything questionable in the plots, reports, or interpretations.

glendora_transits_through_the_day.gif

Pablo Lewin WA6RSV

Pablo Lewin

Mar 29, 2026, 2:18:31 AM
to Society of Amateur Radio Astronomers
more stuff
glendora_public_friendly_report.pdf
glendora_radio_astronomer_report_10page.pdf

Pablo Lewin

Mar 29, 2026, 2:19:25 AM
to Society of Amateur Radio Astronomers
glendora_milky_way_arm_transit_overlay.png
glendora_centered_peak_spectra.png
glendora_lv_rotation_overlay_lsrk.png
glendora_HI_dynamic_spectrum.png
glendora_centered_transit_profiles_absolute.png
glendora_centered_transit_animation.gif
glendora_milky_way_arm_transit_simulation.png
glendora_lv_rotation_overlay.png

Pablo Lewin

Mar 29, 2026, 2:21:06 AM
to Society of Amateur Radio Astronomers
glendora_centered_transit_profiles.csv
glendora_HI_moments.csv

Stephen Arbogast

Mar 29, 2026, 2:54:05 AM
to Society of Amateur Radio Astronomers
Pablo, I am being very careful these days. I started with an HP calculator in the 1970s. I use ezRA now, which uses well-established Python libraries for radio astronomy.

Pablo Lewin

Mar 29, 2026, 4:00:43 AM
to Society of Amateur Radio Astronomers
Ok...Thank you for your comment.

Eduard Mol

Mar 29, 2026, 4:05:21 AM
to sara...@googlegroups.com
Hi Pablo. 

I don’t have the time to go over it right now and don’t quite understand what all the plots are showing exactly. The HI drift-scan plots look OK at first glance, though.
If it’s all HI data, it’s fairly easy to validate: check the profiles against the LAB HI survey and process your spectra with an existing, well-documented pipeline (for example EzRA) to compare the results.

Also, how exactly did you use the LLM in your processing workflow? Did you already have a plan on how to process the data and vibe-coded your way through it, or did you just hand over the data straight to the LLM with some instructions and hope for the best? These are two very different approaches, and one of those is much more controlled and easier to check / replicate than the other…

Eduard



Stephen Arbogast

Mar 29, 2026, 4:27:08 AM
to Society of Amateur Radio Astronomers
I totally agree with Eduard Mol. It is very important that we use proven and well-documented procedures such as ezRA.

Pablo Lewin

Mar 29, 2026, 4:50:12 AM
to Society of Amateur Radio Astronomers

Hi Eduard,

Thanks — that is helpful feedback.

You are right that the core result is really the H I drift-scan itself. Most of the additional plots were just different ways of looking at the same dataset, and some are more interpretive than others.

The ones I would consider the primary data products are:

  1. H I strength versus time / RA for the zenith drift scan
    This is the basic “where is the 21 cm line strongest as the sky drifts through the beam?” plot.
  2. Representative spectra
    A few example spectra from weak-signal and strong-signal parts of the drift, mainly to show that the line shape changes in a sensible way.
  3. Centered transit plots
    These just align the strongest H I transits on their peak so their shapes can be compared directly.
  4. Dynamic spectrum / waterfall
    Time versus velocity, showing how the H I line strength and centroid evolve through the drift.
  5. Moment plots
    Integrated line strength, centroid velocity, and linewidth versus time. These are just compressed summaries of each spectrum.
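For anyone who wants to check the moment numbers independently, the three quantities above can be computed directly from each spectrum. A minimal generic sketch (not the actual ChatGPT-generated pipeline code; it assumes a baseline-subtracted spectrum on a velocity grid):

```python
import numpy as np

def hi_moments(velocity, spectrum):
    """Moment 0 (integrated line strength), moment 1 (centroid velocity),
    and moment 2 (linewidth) of a single H I spectrum.

    velocity : channel velocities in km/s
    spectrum : baseline-subtracted line intensities (e.g. K)
    """
    dv = np.abs(np.gradient(velocity))               # per-channel width, km/s
    mom0 = np.sum(spectrum * dv)                     # integrated intensity (K km/s)
    mom1 = np.sum(velocity * spectrum * dv) / mom0   # intensity-weighted centroid
    mom2 = np.sqrt(np.sum((velocity - mom1) ** 2 * spectrum * dv) / mom0)  # dispersion
    return mom0, mom1, mom2

# Example: a synthetic Gaussian line centered at -40 km/s with sigma = 10 km/s
v = np.linspace(-200.0, 200.0, 401)
spec = np.exp(-0.5 * ((v + 40.0) / 10.0) ** 2)
m0, m1, m2 = hi_moments(v, spec)
```

Running this on the synthetic line recovers the input centroid and width, which is a quick way to sanity-check any moment table a pipeline (AI-generated or otherwise) produces.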

The more model-dependent plots were:

  • the schematic Milky Way arm transit overlay
  • the l–v style plot with arm-crossing guide bands
  • the crude rotation-curve overlay
  • the two-component fit in the Cygnus direction

I would treat those as exploratory / interpretive figures, not as primary validated results.

The more observationally grounded diagnostic plots were:

  • zenith track in Galactic coordinates with H I strength
  • sidereal repeatability from day to day
  • transit width versus expected beam-crossing time
  • LSRK correction terms versus time

So in short: the solid part is the repeatable H I drift scan itself; the rest is mostly derived visualization and interpretation layered on top of it.

On the LLM question: it was not a case of “upload the data and hope for the best,” but it also was not a pre-written rigid pipeline from the start.

The actual workflow was closer to this:

  • I already knew the observation was a fixed zenith 21 cm drift scan.
  • I used the LLM to help inspect the archive, parse the spectra, sort by timestamp, and generate analysis code iteratively.
  • Then I used it to produce successively more derived plots: folded repeatability, RA/Galactic-coordinate mapping, velocity conversion, LSRK correction, and some schematic interpretation plots.
  • At each step I checked whether the results were at least physically self-consistent: repeated at sidereal time, strongest near Galactic-plane crossings, sensible velocity sign changes, etc.

So I would describe it as an assisted exploratory reduction, not a blind black-box reduction, but also not yet a formally validated pipeline result.

I completely agree that the right next step is to validate the spectra against the LAB H I survey and run the same data through an existing documented pipeline such as EzRA. That would be the clean way to separate:

  • what is genuinely supported by the data,
  • what is just a useful visualization,
  • and what is too interpretive.

So my own summary would be:

  • the H I drift scan detection itself looks real,
  • the repeatability looks encouraging,
  • the derived velocity/LSRK products are plausible,
  • but the whole reduction still needs external validation against LAB and/or EzRA before I’d treat the more interpretive plots as anything beyond exploratory.

That is probably the fairest characterization of what was done.

Best,
Pablo

Eduard Mol

Mar 29, 2026, 5:50:31 AM
to sara...@googlegroups.com
Ah okay, thanks for the explanation. 

So if I understand it correctly, you are breaking the task down into smaller and more controllable steps, but you are not writing and running the code yourself?
I suppose (?) the LLM is generating snippets of code in the background and running the data through them, probably in Python, because I recognise the plotting style from Matplotlib. Otherwise I don’t really see how an LLM would be capable of processing numerical data in a reliable way. But of course I could be wrong; I am far from an expert in this field.

Another fun experiment would be to repeat the data processing with the exact same prompts a few times and check whether you get the same results, or to alter the input prompts slightly in a way that in theory should not matter for the end result (for example, a different dish size should not matter for the LSR correction). If you suddenly see different results, that would be a strong indication that you are dealing with a “black box” with probabilistic outcomes that are not well constrained.

Op zo 29 mrt 2026 om 10:50 schreef Pablo Lewin <pabl...@gmail.com>

Pablo Lewin

Mar 29, 2026, 12:53:23 PM
to Society of Amateur Radio Astronomers

Hi Eduard,

Yes, I think that is basically right.

What I’m trying to explore is whether artificial intelligence can make it easier for amateur radio astronomers not only to reduce data, but also to explain what the data actually mean in physical terms.

For me, that second part is just as important as the plotting. A concrete example is dedispersion. Before feeding this material into AI, I did not really understand that pulses do not arrive at all radio frequencies at the same time, or that the interstellar medium acts like a thin plasma so lower radio frequencies arrive later than higher ones, with a delay that scales roughly as frequency to the power of minus two. That is something AI was able to explain to me clearly as a non-professional citizen scientist, and that kind of guided learning is a big part of why I’m interested in this.
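The frequency-to-the-minus-two delay is easy to play with numerically. Here is an illustrative sketch of the standard cold-plasma dispersion formula (this is generic, not part of my reduction; the DM value is roughly that of the bright pulsar B0329+54):

```python
# Dispersion delay between two observing frequencies for a given
# dispersion measure (DM). The interstellar plasma delays lower
# frequencies more, with delay proportional to frequency^-2.
K_DM_MS = 4.148808  # dispersion constant, ms * GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay (ms) of the lower frequency relative to the
    higher one, for DM in pc cm^-3 and frequencies in GHz."""
    return K_DM_MS * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Example: DM = 26.8 pc cm^-3 across a 1.40-1.42 GHz band
delay = dispersion_delay_ms(26.8, 1.40, 1.42)
```

Even across a narrow 20 MHz band at L-band, the smearing is on the order of a millisecond and a half, which is why pulsar work needs dedispersion while an H I line observation does not.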

So the goal is not “trust the black box and accept whatever comes out.” The goal is to see whether AI can become a useful assistant for amateurs in two ways:

  1. helping break reduction into smaller, more understandable steps
  2. helping interpret the results in scientifically meaningful language

In this case, yes, the workflow involved AI-assisted generation of analysis steps and code, which was then used to inspect and visualize the data. So it is not purely manual, but it is also not just “upload data and hope for the best.” I’m trying to make the process more understandable and accessible, especially for people who are not professional astronomers or programmers.

I also think your suggestion about repeatability has a lot of merit. Re-running the same prompts, or changing prompts in ways that should not affect the physics, is exactly the kind of test that would help show whether the method is robust or whether it is behaving too much like a probabilistic black box. That seems like an important next step.

Ultimately, I don’t see AI as a replacement for standard validation against things like LAB or a documented pipeline such as EzRA. I see it more as a potentially invaluable tool for helping citizen scientists reduce data, learn the concepts behind the analysis, and gradually get better at deciphering what the cosmos is telling us.

That is really why I’m working on this.

Best,
Pablo

Andrew Thornett

Mar 29, 2026, 1:00:42 PM
to sara...@googlegroups.com
Until your message here, Pablo, I had never heard of dedispersion! Must admit I typed it into ChatGPT, and am now much more informed! 😊 
Andy


Adrian

Mar 29, 2026, 3:49:03 PM
to Society of Amateur Radio Astronomers
Dedispersion is a basic tenet of FRB work, in both burst identification and distance estimation.

Pablo Lewin

Mar 29, 2026, 3:54:38 PM
to Society of Amateur Radio Astronomers
Dedispersion is one of the absolute basics of FRB work, central to both identifying bursts and estimating their distances. As amateur radio astronomers, we really ought to understand concepts like this, because they sit at the core of interpreting what we observe rather than just collecting data for the sake of it.

The difficulty, of course, is that not all of us have immediate access to a dedicated carbon-based unit with the right astrophysical background to explain these things on demand. That is where AI becomes genuinely useful: not as a replacement for understanding, but as a readily available tool that can fill in the gaps, explain the theory, and help connect the dots between raw spectra, dispersion measures, and the underlying physics. For amateurs working increasingly close to professional methods, that kind of support can make the difference between simply recording signals and actually understanding what those signals mean.

Adrian

Mar 29, 2026, 4:48:42 PM
to Society of Amateur Radio Astronomers
I 100% agree.

Robert Meade

Mar 29, 2026, 8:33:59 PM
to sara...@googlegroups.com
Pablo,

Can you share a summary of what/how you prompted GPT to generate the "reports"? Are the "reports" you had it generate the actual 1-page, 5-page, and 10-page reports/papers you referenced?

-Robert

Pablo Lewin

Mar 29, 2026, 9:55:35 PM
to Society of Amateur Radio Astronomers
Robert (Any connection with the Meade Telescopes?)

Here are the prompts I used (some recommended by the AI as well), along with the reports, attached.

Best

Pablo Lewin WA6RSV

  • Analyze the 2.1 m radio telescope data from Glendora on March 18, 2026 local time
  • It’s a drift scan
  • The dish was pointing straight up all the time
  • Graph the transits where the highest peak is at the center
  • What further analysis / interpretations / graphs can you add here?
  • Can you graph a simulation of the Milky Way arm transits based on this data?
  • Make an l–v style plot that compares the measured Doppler velocities to the simulated arm-crossing directions
  • Add a crude Milky Way rotation-curve overlay on the same l–v plot
  • Make an LSR-corrected version using the exact observing times and Glendora location
  • Anything else you can add?
  • Summarize the whole thing so a non-scientist can understand
  • Make it a one-page public-friendly report with captions for each graph
  • Make it a 5-page report for radio astronomers
  • Make a more complete 10-page report with references, graphs, and formulas
  • What does beam dilution mean?
  • What about LSRK?
  • Can you make a GIF or a video of the transit graph?
  • Make a GIF of the transits as the peaks go up and down through the day
  • What’s the brightest pulsar I can detect at 1.4 GHz?
  • Proceed with the list
  • First explain time resolution, dedispersion, folding, and RFI control, then do the worksheet
  • Validate the spectra against the LAB H I survey

Science prompts I recommended / next-step suggestions:

  • Validate the spectra against the LAB H I survey
  • Run the same raw data through EzRA or another documented pipeline for comparison
  • Build an RA / Galactic-coordinate annotated drift map
  • Make a transit-centered waterfall / dynamic spectrum
  • Plot selected velocity channels versus time
  • Measure transit width versus expected beam-crossing time
  • Quantify day-to-day sidereal repeatability
  • Use the off-line baseline as a crude continuum proxy
  • Make a schematic Milky Way arm-transit overlay
  • Build an l–v plot with measured velocities and arm-direction guides
  • Add a crude Milky Way rotation-curve overlay
  • Apply per-scan LSRK corrections from timestamps and site location
  • Fit multiple velocity components in the Cygnus-direction H I profile
  • Build a face-on Milky Way style reconstruction from sampled longitude/velocity windows
  • For pulsars, use a wideband pulsar backend rather than a narrow H I backend
  • For pulsars, start with a known bright slow pulsar such as B0329+54
  • For multi-dish H I work, use calibrated spectral stacking rather than pretending it is interferometry
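One of the checks in the list above, transit width versus expected beam-crossing time, can be estimated by hand. A rough sketch for a 2.1 m dish at 21 cm (this assumes a ~1.22 λ/D beamwidth, which is only an approximation for a real feed illumination, and a zenith scan at Glendora's latitude of about +34°):

```python
import math

def beam_crossing_minutes(dish_d_m, wavelength_m, dec_deg):
    """Approximate time (minutes) for a source at declination dec_deg
    to drift through the beam of a dish of diameter dish_d_m."""
    # Rough beamwidth estimate for a circular aperture, in degrees
    theta_deg = math.degrees(1.22 * wavelength_m / dish_d_m)
    # Sidereal drift rate on the sky, degrees per minute, scaled by cos(dec)
    sky_rate = 360.0 / (23.9345 * 60.0) * math.cos(math.radians(dec_deg))
    return theta_deg / sky_rate

# 2.1 m dish at the 21 cm H I line, zenith scan at ~+34 deg declination
t = beam_crossing_minutes(2.1, 0.21, 34.0)
```

The ~7 degree beam predicts transits lasting roughly half an hour, which is the scale the measured transit widths can be compared against.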

glendora_radio_astronomer_report_5page.pdf
glendora_public_friendly_report.pdf
glendora_radio_astronomer_report_10page.pdf

Eduard Mol

Mar 30, 2026, 1:16:14 AM
to sara...@googlegroups.com
If one really wants to use LLMs for this, I’d caution against very open-ended prompts like “what further analysis / interpretations can you add?” and such. We are all aware by now (I hope) of the problems and pitfalls inherent to LLMs. In particular, when doing such “vibe astronomy” there is always a chance that the LLM will send you down nonsensical rabbit holes instead of providing good information (after all, LLMs have no concept of “correct” and “false” information). I’ve seen several examples of this in the wild already.


Pablo Lewin

Mar 30, 2026, 2:15:29 AM
to Society of Amateur Radio Astronomers

Hi Eduard,

That is a fair caution, and I agree with the general point.

But from the perspective of a non-scientist or citizen scientist, there is also a practical problem: if we are not already experts, how are we supposed to know what AI is capable of, what it is not capable of, and even what kinds of things are available to try?

That is part of what I am trying to learn by doing.

I am not claiming that open-ended prompts are the ideal scientific method. I understand the risk that an LLM can send someone down a nonsensical rabbit hole or produce something that sounds convincing without being well grounded. That is exactly why I think this needs to be explored openly and critically rather than either blindly trusted or dismissed outright.

For non-scientists, AI can still have real value as a starting point:

  • helping break a big problem into smaller steps
  • suggesting analyses we might not know to ask for
  • explaining unfamiliar concepts in accessible language
  • helping us learn enough to ask better questions afterward

The danger is obvious, but so is the opportunity.

In other words, I do not see this as “vibe astronomy instead of science.” I see it as a learning and assistance tool that still has to be checked against real data, known pipelines, surveys, and expert feedback. For people outside the professional community, that guidance function may be one of the most important uses of AI.

So I think your warning is valid, but I also think there has to be room for non-scientists to experiment with these tools in order to understand both their usefulness and their limits. Otherwise we are being told to be cautious about capabilities we have never had the chance to test or even understand.

That is really the spirit in which I am approaching this.

Best,
Pablo
