Hi,
We're setting up hraven on our own clusters and are having problems with the job_history_raw table. Its keys are almost monotonically increasing, so splitting the table becomes an issue.
We're currently considering two possible setups. The first is to set max_filesize to several GB, so the table splits every few days. After each split, all subsequent writes go to the last region, while each earlier region stays at roughly half the configured size. To reclaim that space, we would need to run major compactions at about the same frequency.
The other is to set max_filesize to a very large value, together with an appropriate TTL, so the table is restricted to a single region at all times. But this way the load is very unbalanced.
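For concreteness, the two setups would look roughly like this in the HBase shell (the exact sizes, the TTL value, and the column family name 'r' are placeholders, not our actual settings):

```
# Setup 1: allow splits every few days at a few GB, then major-compact on a similar schedule
alter 'job_history_raw', MAX_FILESIZE => '5368709120'    # ~5 GB (placeholder)
major_compact 'job_history_raw'                          # run periodically, e.g. from cron

# Setup 2: effectively one region, with data bounded by TTL instead of splits
alter 'job_history_raw', MAX_FILESIZE => '107374182400'  # ~100 GB (placeholder), so it never splits
alter 'job_history_raw', {NAME => 'r', TTL => 2592000}   # 30-day TTL; family name assumed
```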
Could you offer some suggestions?
Thanks!