In general this is a tricky topic. The linked blog post helped me in the past, but it won't give you a magical number that's always true.
What I have found is that you really need to worry about one thing: the number of time series scraped.
With that in mind you can calculate how much memory is needed per time series with a simple query: go_memstats_alloc_bytes / prometheus_tsdb_head_series.
This gives you the per time series memory cost before GC. Unless you set a custom GOGC env variable for your Prometheus instance, you usually need to double that to get the RSS memory cost.
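So as a rough ballpark, assuming the default GOGC and that your Prometheus scrapes itself under a job="prometheus" label (adjust the selector to your setup), something like this should work:

  # estimated RSS cost per time series, before the other overheads mentioned below
  2 * go_memstats_alloc_bytes{job="prometheus"} / prometheus_tsdb_head_series{job="prometheus"}

Multiply that by the number of series you expect to scrape and you get a first-order estimate of memory needed.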
Then we need to add other memory costs to the mix, and these are less easy to quantify: other parts of Prometheus use memory too, and queries will eat more or less memory depending on how complex they are, etc. So it gets more fuzzy from there. But in general memory usage will scale with (since it's mostly driven by) the number of time series you have in Prometheus, and prometheus_tsdb_head_series tells you that.
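If you want a less theoretical number, you can also just divide the actual RSS by the head series count (again assuming a job="prometheus" selector), which folds all of those fuzzier costs into a single per-series figure:

  # observed bytes of RSS per in-memory series, overheads included
  process_resident_memory_bytes{job="prometheus"} / prometheus_tsdb_head_series{job="prometheus"}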
Now another complication is that all time series stay in memory for a while even if you scrape them only once. If you plot prometheus_tsdb_head_series over a few hours you should see it drop every now and then: there's garbage collection of old series (which you can see in the logs), and blocks get written from in-memory data every 2h (by default, AFAIR). This is an important thing to remember - if you have a lot of "event like" metrics that are exported only for a few seconds, for example because some services put things like user IDs or request paths into labels so the label values keep changing all the time, then all of that accumulates in memory until the GC/block write happens. Again, prometheus_tsdb_head_series will show you that - if it just keeps growing all the time then so will your memory usage.
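A rough way to watch for that kind of churn (metric names from memory, so double check them on your version) is to graph how fast new series are being created vs removed from the head:

  # series churn: new series appearing vs series dropped by head GC
  rate(prometheus_tsdb_head_series_created_total{job="prometheus"}[5m])
  rate(prometheus_tsdb_head_series_removed_total{job="prometheus"}[5m])

If the created rate stays well above the removed rate between block writes, that's the accumulation I'm talking about.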
tl;dr keep an eye on prometheus_tsdb_head_series and you'll see how many time series you're able to fit into your instance