After you have a few hundred entries in a single ziplist-encoded hash, the overhead of the key and associated structures is a fraction of a percent of the encoded size, so beyond that point you are just wasting computational resources on the encoding/decoding steps of using a hash that large. If you want to minimize key count to make things like "KEYS *" useful (though I would still discourage such things), I'd cap it at a few thousand entries at most, if only to minimize memory scanning and churn during insertion/deletion (if those matter to you).
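If you want to see where the encoding flips over for yourself, here is a minimal sketch (assuming the redis-py client and a local Redis server; the key name and threshold are arbitrary, and on Redis 7+ the compact encoding is reported as "listpack" rather than "ziplist"):

```python
# Sketch: watch a hash's encoding change from the compact ziplist form
# to a real hash table as it grows past hash-max-ziplist-entries.
import redis

r = redis.Redis(host="localhost", port=6379)

# Cap the compact encoding at 512 entries for this demonstration.
# (On Redis 7+ this config name is an alias for hash-max-listpack-entries.)
r.config_set("hash-max-ziplist-entries", 512)
r.delete("demo:hash")

for i in range(600):
    r.hset("demo:hash", f"field:{i}", i)
    if i in (100, 511, 512, 599):
        enc = r.object("encoding", "demo:hash").decode()
        print(f"{i + 1} entries -> encoding: {enc}")
# Expected: 'ziplist' (or 'listpack') up through 512 entries,
# then 'hashtable' once the threshold is exceeded.
```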
With respect to plain sets, yes, you do see an 8-9x reduction in space when data is stored as a ziplist-encoded set rather than as a hash table-backed set, but that comes at the expense of performance during insertion, removal, membership checking, etc. As in many things with computers, there is a time/space tradeoff. Redis chose (generally) to be fast, not small. It has made some concessions with the ziplist encodings, which address the issue for most people (though they are not perfect).
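To make the tradeoff concrete, here is a rough sketch that compares the footprint of the same fields under the two encodings (again assuming redis-py, plus Redis 4+ for the MEMORY USAGE command; the exact ratio will vary by version and platform):

```python
# Sketch: measure the same 500 fields stored compactly (ziplist) vs.
# in a real hash table. Key names and thresholds are made up for the demo.
import redis

r = redis.Redis()
r.delete("ziplist:hash", "hashtable:hash")

# High threshold: this hash stays in the compact encoding (small, slower ops).
r.config_set("hash-max-ziplist-entries", 1024)
for i in range(500):
    r.hset("ziplist:hash", f"f:{i}", i)

# Tiny threshold: this hash converts to a hash table (bigger, O(1) ops).
# The first hash is not written again, so it keeps its compact encoding.
r.config_set("hash-max-ziplist-entries", 8)
for i in range(500):
    r.hset("hashtable:hash", f"f:{i}", i)

small = r.memory_usage("ziplist:hash")
large = r.memory_usage("hashtable:hash")
print(f"compact: {small} bytes, hashtable: {large} bytes, "
      f"ratio: {large / small:.1f}x")
```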
Rather than asking about the limits of what Redis can do for you, if you were to tell us what you actually want to do with Redis, we would likely be able to give you better advice. I say this after having answered several hundred questions like yours over the last 4+ years.