Hi,
I would like to configure a ClickHouse table with a ReplicatedMergeTree engine and a storage policy that moves data from hot local storage to cold S3 remote storage after a few days.
ClickHouse handles that part fine.
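For context, here is roughly the kind of setup I have in mind (a minimal sketch only; the table name, ZooKeeper path, TTL interval, and the 'hot_to_s3' policy with its S3-backed 'cold' volume are placeholders, assuming that policy is declared in the server's storage configuration):

```sql
-- Replicated table whose parts start on the local (hot) volume and are
-- moved to the S3-backed 'cold' volume of the assumed 'hot_to_s3' policy
-- once they are a few days old.
CREATE TABLE events
(
    event_date Date,
    event_time DateTime,
    payload    String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_time)
TTL event_date + INTERVAL 3 DAY TO VOLUME 'cold'
SETTINGS storage_policy = 'hot_to_s3';
```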
However, I need resilience for the hot storage by duplicating data (hence ReplicatedMergeTree), but I do not need resilience for the cold storage, since the S3 backend already provides it (via erasure coding). If I use a ReplicatedMergeTree, the data will be replicated in S3 as well, on top of S3's erasure coding.
Is there a simple way to avoid duplicating the data in S3?
Would the required modifications to the ClickHouse code be complex?
Thanks for your help!
Regards,
Pierre