This:
go_memstats_alloc_bytes / prometheus_tsdb_head_series
gives you bytes per time series. This number is the basis of my capacity planning. It depends on how many labels end up on your metrics, but it's usually fairly stable; we typically see around 3-4KB per time series.
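If you want to track this ratio over time, one option is a recording rule; a minimal sketch (the rule name here is my own invention, not a standard one):

```yaml
groups:
  - name: capacity
    rules:
      # Records the "bytes per time series" ratio described above.
      - record: instance:prometheus_bytes_per_series:ratio
        expr: go_memstats_alloc_bytes / prometheus_tsdb_head_series
```

Recording it gives you history, so you can see how stable the ratio actually is for your workload before trusting it for planning.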
If you use remote read or run lots of expensive queries, this number will include that memory too, which can make it less stable.
process_resident_memory_bytes depends on a few factors.
First, the GOGC setting: the default is 100, so process_resident_memory_bytes can end up around 2x the size of the Go heap as a result.
Second, data on disk, plus some recent time series that haven't been written to disk yet, will be mmapped, and that memory shows up in process_resident_memory_bytes. The OS manages this memory itself, so process_resident_memory_bytes can change in ways that are somewhat unpredictable.
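To get a rough sense of how much of the resident set is not Go heap (mmapped chunks, allocator overhead, and so on), you can compare the two metrics directly; this is only an approximation, since the Go runtime also holds memory outside of live allocations:

```
process_resident_memory_bytes - go_memstats_alloc_bytes
```

If this difference grows a lot relative to the heap, mmapped data or GC headroom is likely the reason rather than the scraped series themselves.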
In general there are many different components that consume memory, but usually the biggest one is all the scraped time series, which is why I focus only on the
go_memstats_alloc_bytes / prometheus_tsdb_head_series ratio for capacity planning.
If you take your average "bytes per time series", multiply it by the number of time series you need to store, and then multiply by 2 (to account for GC and the other things consuming memory), you usually get a rough idea of how much memory you need.
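The rule of thumb above is just arithmetic; a small sketch, with example numbers rather than real measurements:

```python
def estimate_memory_bytes(bytes_per_series: float, num_series: int,
                          overhead_factor: float = 2.0) -> float:
    """Bytes per series x series count x overhead factor (GC, mmapped data, etc.)."""
    return bytes_per_series * num_series * overhead_factor

# Example: 4 KiB per series, 1 million series, 2x overhead.
estimate = estimate_memory_bytes(4 * 1024, 1_000_000)
print(f"{estimate / 2**30:.1f} GiB")  # roughly 7.6 GiB
```

Plug in the ratio you actually measure from your own Prometheus rather than the 4 KiB used here.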