Folks,
<Cross posting from slack>
I am looking at Ceph Object Store in the EC configuration. It looks like crds.yaml caps the number of data chunks at a maximum of 9 - is there a reason for this?
erasureCoded:
  description: The erasure code settings
  properties:
    algorithm:
      description: The algorithm for erasure coding
      type: string
    codingChunks:
      description: Number of coding chunks per object in an erasure coded storage pool (required for erasure-coded pool type)
      maximum: 9
      minimum: 0
      type: integer
    dataChunks:
      description: Number of data chunks per object in an erasure coded storage pool (required for erasure-coded pool type)
      maximum: 9
      minimum: 0
      type: integer
I am looking to create and test a (17,3) combination, so any help would be great!
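For reference, the spec I have in mind is roughly the sketch below (the names are placeholders and the field layout is taken from the CephObjectStore examples, so treat it as an illustration rather than something I have working) - with the current CRD, the dataChunks: 17 line fails validation against the maximum: 9:

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: cold-store            # placeholder name
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 17          # rejected today: the CRD caps this at 9
      codingChunks: 3
  gateway:
    port: 80
    instances: 2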
Some background:
- We are looking at very large, very cold multi-petabyte storage: we want to right-size the cluster(s) and keep the storage overhead low.
- We are looking at 100-drive JBODs + some SSD nodes + a 25/25 Gbps link setup.
- The theory is that, given the low throughput requirements, we may be able to get away with (17,3), which keeps the raw-to-usable overhead around 20/17 ≈ 1.18x - we are looking for real-world advice.
- Ceph itself doesn't have any such restriction on the number of data chunks; Rook seems to impose it, so I am wondering why.
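If the limit really is only in the CRD schema, the workaround I am tempted to try (just a sketch - I have not checked whether the operator enforces the same bound anywhere in code) is to raise the maximum in crds.yaml before applying it:

dataChunks:
  description: Number of data chunks per object in an erasure coded storage pool (required for erasure-coded pool type)
  maximum: 20   # locally raised from 9 to allow a (17,3) profile
  minimum: 0
  type: integer

But if the 9 was chosen deliberately, I would rather understand the reasoning before patching around it.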
Thanks. Subu