Dear all,
I am at a very early stage in amateur HI radio astronomy and have been trying to move away from the usual heuristics I often see (fixed numbers of integrations, "rule-of-thumb" observing times, etc.), since I find them hard to justify quantitatively when planning observations with small instruments.
As an exercise, I have been working through a simple, iterative planning approach based on the measured RMS of pilot observations and the radiometer equation, dT_rms ~ T_sys / sqrt(dnu * tau). In essence, the idea is to (i) measure the RMS in line-free regions after baseline removal, (ii) compare this observed RMS with the expected thermal RMS to check whether the system is still in a thermally dominated regime, and (iii) extrapolate integration time with the 1/sqrt(tau) scaling only while that condition holds. When the RMS stops decreasing efficiently and instead levels off toward a constant value, I treat this as an instrumental/systematic floor and use it as a practical stopping criterion, rather than continuing to integrate blindly.
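To make the loop concrete, here is a minimal sketch in Python of the kind of check I have in mind. Everything in it is illustrative: the function names, the margin tolerance, and the assumption of calibrated spectra in kelvin are placeholders of my own, not an established recipe.

import numpy as np

def thermal_rms(t_sys_k, chan_width_hz, tau_s):
    # Radiometer equation: expected thermal RMS per channel, in K
    return t_sys_k / np.sqrt(chan_width_hz * tau_s)

def measured_rms(spectrum, line_free, baseline_order=3):
    # Fit a low-order polynomial baseline on the line-free channels,
    # subtract it, and return the RMS of the residuals there
    x = np.arange(spectrum.size)
    coeffs = np.polyfit(x[line_free], spectrum[line_free], baseline_order)
    residual = spectrum - np.polyval(coeffs, x)
    return np.std(residual[line_free])

def keep_integrating(spectra, line_free, t_sys_k, chan_width_hz,
                     dump_s, margin=1.5):
    # spectra: (n_dumps, n_chan) array of calibrated spectra in K.
    # Returns True while the RMS of the stacked spectrum still tracks
    # the 1/sqrt(tau) prediction to within 'margin'; False once the
    # excess suggests an instrumental/systematic floor.
    n_dumps = spectra.shape[0]
    stacked = spectra.mean(axis=0)
    rms_obs = measured_rms(stacked, line_free)
    rms_exp = thermal_rms(t_sys_k, chan_width_hz, n_dumps * dump_s)
    return rms_obs < margin * rms_exp

# Hypothetical numbers: 300 one-second dumps, 1 kHz channels, T_sys ~ 150 K,
# pure thermal noise, so the check should come back True
rng = np.random.default_rng(0)
spectra = thermal_rms(150.0, 1e3, 1.0) * rng.standard_normal((300, 1024))
line_free = np.r_[0:400, 600:1024]  # channels well away from the HI line
print(keep_integrating(spectra, line_free, 150.0, 1e3, 1.0))

In a real session I would re-run this after each batch of dumps and stop (or investigate) as soon as it returns False, which is exactly the stopping criterion described above.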
I am not claiming any novelty here; this is mainly an attempt to formalize, in a transparent way, decisions that are often made implicitly. I would very much appreciate feedback from the community on whether this conceptual approach makes sense in practice, what assumptions might be too optimistic for small-dish / SDR-based setups, and whether there are known statistical or instrumental pitfalls (e.g. correlated noise, baseline effects, RFI handling) that tend to invalidate this kind of RMS-based planning if one is not careful.
Comments on how applicable (or not) this framework is across different receivers, backends, or observing strategies would also be extremely helpful. I am very open to corrections and criticism, as this is primarily a learning exercise.
Best regards,
Tiago Baroni
