That does seem way too large. I assume your read nodes are in ro mode?
So, I run reads a little differently. My usage pattern tends to involve lots of parallel requests, so my latest infrastructure runs 5 Docker containers on each data node, each with a 4 GB heap, and HAProxy balances requests across them.
This gives me 40 TSDB read instances. At around 1,200 rps the read nodes are mostly idle in terms of resource usage.
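For reference, the HAProxy side of this is just round-robin over the local containers. A minimal sketch (port numbers and backend/server names here are made up for illustration, not my actual config):

```haproxy
# Minimal haproxy.cfg sketch: round-robin across 5 local TSDB read containers.
# Ports 4243-4247 and all names are hypothetical.
frontend tsdb_read
    bind *:4242
    mode http
    default_backend tsdb_readers

backend tsdb_readers
    mode http
    balance roundrobin
    server tsd1 127.0.0.1:4243 check
    server tsd2 127.0.0.1:4244 check
    server tsd3 127.0.0.1:4245 check
    server tsd4 127.0.0.1:4246 check
    server tsd5 127.0.0.1:4247 check
```

The `check` keyword gives you basic health checking, so a wedged container gets pulled out of rotation instead of eating requests.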
This cluster doesn't have it yet, but Turn built Splicer, a query tool for OpenTSDB. It shards queries into 1-hour blocks and caches the result blocks in Redis, which is great for installations where most of the traffic is dashboards on TV monitors. It also routes each query to the data node where the region holding the queried metric lives, which greatly improves read performance.
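The sharding-plus-caching idea is simple enough to sketch. This is not Splicer's actual code, just an illustration of the technique: split a time range on hour boundaries, then serve each block from cache when possible. A plain dict stands in for Redis, and the `fetch` callback stands in for the real TSDB query:

```python
HOUR = 3600  # seconds per block

def shard_query(start, end):
    """Split a [start, end) epoch-second range into hour-aligned blocks."""
    shards = []
    t = start - (start % HOUR)  # align down to the hour
    while t < end:
        shards.append((max(t, start), min(t + HOUR, end)))
        t += HOUR
    return shards

# A dict stands in for Redis here; keys would be metric + block bounds.
cache = {}

def cached_query(metric, start, end, fetch):
    """Serve each hour block from cache, calling fetch() only on misses."""
    results = []
    for s, e in shard_query(start, end):
        key = (metric, s, e)
        if key not in cache:
            cache[key] = fetch(metric, s, e)
        results.append(cache[key])
    return results
```

The win for dashboard traffic is that a repeating query only ever misses on the newest, still-open hour block; all the older blocks are immutable and stay hot in the cache.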