Rook EC restriction


Subu Sankara Subramanian

Nov 4, 2021, 12:53:56 PM
to rook-dev
Folks,

<Cross-posting from Slack>

I am looking at the Ceph Object Store in an EC (erasure-coded) configuration. It looks like crds.yaml caps the data chunk count at 9 - is there a reason for this?
```yaml
erasureCoded:
  description: The erasure code settings
  properties:
    algorithm:
      description: The algorithm for erasure coding
      type: string
    codingChunks:
      description: Number of coding chunks per object in an erasure coded storage pool (required for erasure-coded pool type)
      maximum: 9
      minimum: 0
      type: integer
    dataChunks:
      description: Number of data chunks per object in an erasure coded storage pool (required for erasure-coded pool type)
      maximum: 9
      minimum: 0
```

I am looking to create and test a 17,3 combination, so any help would be great!
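For reference, a 17+3 object store spec would look something like the sketch below (the name and namespace are hypothetical; the field layout follows the standard Rook CephObjectStore CRD). With the stock crds.yaml validation quoted earlier, the API server would reject dataChunks: 17 because of the maximum: 9 constraint:

```yaml
# Hypothetical CephObjectStore with a 17+3 erasure-coded data pool.
# NOTE: the stock CRD schema (maximum: 9) would reject dataChunks: 17
# until that validation is relaxed.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store        # hypothetical name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    erasureCoded:
      dataChunks: 17
      codingChunks: 3
```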

Some background:
- We are looking at a very large, very cold multi-petabyte storage: we want to right-size the cluster(s) and have a low-storage overhead.
- We are looking at 100 drive JBODs + some SSD nodes + 25/25Gbps link setup.
- The theory here is that considering the low throughput, we may be able to get away with (17,3) - we are looking for real world advice.
- Ceph itself doesn't have any such restriction on the number of data chunks; Rook seems to impose it, so I am wondering why.
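The storage-overhead motivation above can be sketched with back-of-the-envelope arithmetic (my own calculation, not Rook or Ceph output):

```python
def ec_overhead(k: int, m: int) -> float:
    """Raw-to-usable storage ratio for a k+m erasure-coded pool."""
    return (k + m) / k

# A 17+3 profile needs ~1.18 bytes of raw capacity per usable byte,
# versus 1.5 for a common 4+2 profile or 3.0 for 3x replication -
# a meaningful difference at multi-petabyte scale.
print(f"17+3 overhead: {ec_overhead(17, 3):.3f}x")
print(f" 4+2 overhead: {ec_overhead(4, 2):.3f}x")
```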


Thanks. Subu

Travis Nielsen

Nov 4, 2021, 1:34:15 PM
to Subu Sankara Subramanian, rook-dev
Hi, this was just answered in Slack, right? While it's possible to have a larger chunk count, it's not recommended since it would take a lot of resources to restore lost data.
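The recovery cost can be quantified roughly: with a standard Reed-Solomon profile, rebuilding one lost chunk requires reading all k surviving data-chunk's worth of data, so recovery read traffic scales with the data-chunk count (a simplified model that ignores locally-repairable codes and plugin specifics; the 18 TB drive size is just an example):

```python
def rebuild_read_amplification(k: int) -> int:
    """Bytes read per byte rebuilt when recovering a lost chunk in a
    k+m Reed-Solomon profile: k surviving chunks must be read to decode."""
    return k

# Example: rebuilding a failed 18 TB drive (hypothetical size).
drive_tb = 18
print(f"17+3: read ~{rebuild_read_amplification(17) * drive_tb} TB to rebuild {drive_tb} TB")
print(f" 4+2: read ~{rebuild_read_amplification(4) * drive_tb} TB to rebuild {drive_tb} TB")
```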

Travis
