Required disk space depends on compression, and compression depends on the data you store. The unit of data in VM is a data point (or sample) - a pair of a timestamp and a value. Data points are grouped into sorted blocks of data with different compression techniques applied. See more details here:
https://medium.com/faun/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932.
So, let's imagine the following situation: you have 100 targets scraped by Prometheus, every target returns 1000 data points per scrape (every line you see on the `/metrics` page of your target is a data point), and your scrape interval is 15s. Then the number of data points Prometheus collects will be the following:
* for 1m: 100 (targets) * 1000 (data points per target) * 4 (scrapes per minute) = 400000 = 400K
* for 1d: 24 * 60 * 400K = 576000000 = 576M
* for 30d: 30 * 576M = 17280M
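If it helps, here is a minimal Go sketch of the same arithmetic. The variable names and inputs are just the assumptions from the example above, not anything VM-specific:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical inputs matching the example above.
	targets := 100           // number of scrape targets
	samplesPerTarget := 1000 // data points returned per scrape
	scrapeInterval := 15 * time.Second
	retention := 30 * 24 * time.Hour // 30 days

	// Number of scrapes performed during the whole retention period.
	scrapes := int64(retention / scrapeInterval)

	// Total data points collected: targets * samples per target * scrapes.
	totalSamples := int64(targets) * int64(samplesPerTarget) * scrapes
	fmt.Printf("total samples for 30d: %d (~%.2f billion)\n",
		totalSamples, float64(totalSamples)/1e9)
}
```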
Based on the first link, "Prometheus uses only around 1-2 bytes per sample". So the disk space occupied by Prometheus for 30d will be the following:
* 1 B per sample: 17280M * 1 B = 17.28 GB
* 2 B per sample: 17280M * 2 B = 34.6 GB
VM users report compression rates from 0.4 to 1.2 bytes per data point, so the calculations will be the following:
* 0.4 B per sample: 17280M * 0.4 B = 6.9 GB
* 1.2 B per sample: 17280M * 1.2 B = 20.7 GB
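A similar sketch for translating samples into disk space, using the per-sample figures quoted above (the labels and rates are only the assumptions from this example):

```go
package main

import "fmt"

func main() {
	// 17.28 billion samples for 30d, as computed above.
	const totalSamples = 17_280_000_000

	// Reported bytes-per-sample figures: Prometheus (1-2 B) and VM (0.4-1.2 B).
	bytesPerSample := map[string]float64{
		"Prometheus, 1 B/sample": 1.0,
		"Prometheus, 2 B/sample": 2.0,
		"VM, 0.4 B/sample":       0.4,
		"VM, 1.2 B/sample":       1.2,
	}

	for name, bps := range bytesPerSample {
		gb := totalSamples * bps / 1e9 // decimal gigabytes
		fmt.Printf("%s: %.1f GB\n", name, gb)
	}
}
```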
My recommendation is to set up a single-node version of VM, migrate part of the Prometheus data via vmctl and see how it goes. The dashboard will show details about disk usage, so you'll have real numbers on which to base your decision.