The standard approach for larger setups is to start sharding Prometheus. In Kubernetes it's common to run a Prometheus per namespace.
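As a rough sketch of what per-namespace sharding looks like in practice, each Prometheus instance can restrict service discovery to its own namespace (the namespace name here is a placeholder, not from your setup):

```yaml
# Hypothetical scrape config for one shard: this Prometheus only
# discovers pods in a single namespace, keeping its series count bounded.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - team-a    # one namespace per Prometheus shard (assumed name)
```

Each shard then only holds the head series for its own namespace's workloads, which is what keeps memory per instance manageable.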
You may also want to look into how many metrics each of your pods is exposing. 20GB of memory suggests you probably have over 1M `prometheus_tsdb_head_series`.
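You can check this directly from the Prometheus expression browser. The second query is a common way to find which metric names contribute the most series:

```promql
# Current number of in-memory (head block) series on this Prometheus
prometheus_tsdb_head_series

# Top 10 metric names by series count, to spot cardinality offenders
topk(10, count by (__name__)({__name__=~".+"}))
```

Note the second query is expensive since it touches every series; run it sparingly.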
Changing the scrape interval is probably not going to help as much as reducing your cardinality per Prometheus.
For example, we run a couple of different shards. One uses 33GB of memory for 1.5M series; another uses 38GB for 2.5M series. We allocate 64GB memory instances for these servers.
If you don't want to go down the sharding route, you'll likely need some larger nodes to run Prometheus on.