Hi Jan,
I cannot answer your question "is there something like this on the BeeGFS roadmap?"
But I would like to give you some insight into our experience with the beegfs-chunk-parity project, of which I am the architect.
First, it is probably not ready for production use by anyone other than the developers of the tool.
Having said that, it has been in production use for almost two years at Aarhus University, which runs a BeeGFS installation with 50 storage targets (3.5 PB). With 50 storage targets, the chance of one failing is dangerously high, which is why the beegfs-chunk-parity project was developed.
Every storage target records a change log. A continuous parallel process on the storage servers consumes this change log and updates the parity in the background. This approach has the huge benefit that it does not affect the latency of file operations on the filesystem.
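To make the idea concrete, here is a minimal sketch (in Python, purely illustrative; all names are hypothetical and this is not code from beegfs-chunk-parity) of how a change-log-driven, incremental XOR parity update can work: each log entry only needs the old and new chunk contents, so the parity is kept current without re-reading the whole stripe.

```python
# Illustrative sketch only: XOR-based parity maintained from a change log.
# All class/function names are made up for this example.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

class ParityUpdater:
    def __init__(self, block_size: int = 4):
        self.block_size = block_size
        self.chunks = {}                   # chunk id -> current contents
        self.parity = bytes(block_size)    # running XOR over all chunks

    def apply(self, chunk_id: str, new_contents: bytes):
        """Process one change-log entry.
        parity_new = parity_old XOR old_chunk XOR new_chunk,
        so only the changed chunk is touched."""
        old = self.chunks.get(chunk_id, bytes(self.block_size))
        self.parity = xor_blocks(xor_blocks(self.parity, old), new_contents)
        self.chunks[chunk_id] = new_contents

    def reconstruct(self, lost_id: str) -> bytes:
        """Rebuild a lost chunk by XOR-ing parity with all surviving chunks."""
        out = self.parity
        for cid, contents in self.chunks.items():
            if cid != lost_id:
                out = xor_blocks(out, contents)
        return out

# Replay a tiny change log across two hypothetical targets.
p = ParityUpdater()
p.apply("target1", b"\x01\x02\x03\x04")
p.apply("target2", b"\x10\x20\x30\x40")
p.apply("target1", b"\x05\x06\x07\x08")   # later update to the same chunk

# If target2 dies, its chunk is recoverable from parity + survivors.
assert p.reconstruct("target2") == b"\x10\x20\x30\x40"
```

Real implementations use more robust erasure codes and batch the log, but the key property shown here is the same: parity maintenance is decoupled from the foreground I/O path.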
In November last year, Aarhus University lost an entire storage target (~80 TB). It was successfully recovered using this chunk parity, and the entire filesystem was up and running after 1-2 weeks of downtime, with only a small number of recent files missing. This saved us from a restore from backup, which would have taken months, and from the pain of users realising that the backup only covers the most important files (about 1/5th).
cheers, Rune