There are indeed 1,750,000 timeseries. The labels themselves don't use up any space, except in the timeseries index. A rule of thumb is that about 2 million timeseries is the point where you start thinking about splitting up scrapes between multiple servers.
You can't aggregate before writing (unless you write your own exporter which does this). Alternatively, you could use statsd_exporter and have all the targets push their counter updates to it.
You can use a recording rule to generate the aggregate - and then when you scrape the /federate endpoint pass a match[] query so that only the aggregate timeseries is returned.
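A minimal sketch of that setup, assuming a rule file on the source Prometheus server and a federating server that scrapes it (the job names, hostname, and the `job:http_requests_total:sum` rule name are illustrative, not from the original):

```yaml
# rules.yml — loaded on the source Prometheus server.
# Aggregates away the per-instance label, keeping method and code.
groups:
  - name: aggregate
    rules:
      - record: job:http_requests_total:sum
        expr: sum without (instance) (http_requests_total)

# prometheus.yml — scrape config on the federating server.
# The match[] param ensures only the aggregate series is returned.
scrape_configs:
  - job_name: federate
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__="job:http_requests_total:sum"}'
    static_configs:
      - targets: ['source-prometheus:9090']
```

`honor_labels: true` keeps the labels as exposed by the source server rather than overwriting them with the federating server's own target labels.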
Note that if you simply stripped the labels, you would get conflicting data. For example, at one scrape instant you might have:
http_requests_total{method="GET", code="200"} 100
http_requests_total{method="GET", code="200"} 50
Is the value of the counter at this point in time 100 or 50? Answer: it's neither (it should be 150). And if you look at the metric over time, it would bounce up and down as it flips between different counter values.
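This is why the aggregate should be a sum rather than a label strip. A sketch in PromQL, assuming the two samples above came from two different instances whose `instance` label is being dropped:

```promql
# Summing across instances gives one well-defined series.
sum without (instance) (http_requests_total{method="GET", code="200"})
# => {method="GET", code="200"} 150
```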