This is a classic use case for Federation.
You have a high-frequency scrape server that keeps the raw data for some amount of time. I would probably keep around a month, say 35 days, of raw data. It depends a bit on how much you're scraping.
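As a sketch, raw retention on the high-frequency server is controlled by the standard TSDB retention flag (the config path, data directory, and 35d value here are illustrative assumptions):

```shell
# Run the high-frequency Prometheus with ~35 days of raw retention.
# Paths and the exact duration are illustrative assumptions.
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus \
  --storage.tsdb.retention.time=35d
```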
Then for long-term history, you have a set of recording rules like this:
groups:
  - name: network bandwidth
    interval: 1m
    rules:
      - record: instance:IfInOctets:rate1m
        expr: rate(IfInOctets[1m])
      - record: instance:IfOutOctets:rate1m
        expr: rate(IfOutOctets[1m])
This will give you 1-minute averaged rates. The federation server can then scrape only these recorded series and store them for much longer periods of time.
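As a hedged sketch, the federation server's scrape config would pull just the recorded series via the /federate endpoint; the job name, target address, and the exact match[] selector are assumptions:

```yaml
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 1m
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        # Select only the recording-rule output, e.g. instance:IfInOctets:rate1m.
        - '{__name__=~"instance:.*"}'
    static_configs:
      - targets:
          - 'high-freq-prometheus:9090'  # assumed hostname of the raw-data server
```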
There's no need to adjust --storage.tsdb.min-block-duration: since Prometheus 2.19.0, completed head chunks are written to disk and memory-mapped (mmap), eliminating the memory penalty for higher-frequency scrapes.
All of that said, you might consider looking at Thanos. It provides tiered downsampling with per-resolution retention: you keep raw data for some amount of time, and it transparently serves downsampled data for much longer time ranges. That gives you the best of both worlds: fast raw scrapes and long-term retention of your counter data.
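To illustrate the tiered retention idea, Thanos configures per-resolution retention on the compactor; the paths, bucket config file, and retention periods below are all assumptions, not a recommendation:

```shell
# Thanos compactor with tiered retention: raw data for 90 days,
# 5m-downsampled data for 1 year, 1h-downsampled data forever (0d = unlimited).
# All paths and durations here are illustrative assumptions.
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=/etc/thanos/bucket.yml \
  --retention.resolution-raw=90d \
  --retention.resolution-5m=1y \
  --retention.resolution-1h=0d
```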