Hi Bo,

The chunk size parameter is defined in common/kfstypes.h by the following line:

const size_t CHUNKSIZE = 64u << 20; //!< (64MB)

Currently, the API doesn't offer a function to set the chunk size to an arbitrary value. You can change that line as you desire, but:
This is not something that has been attempted before, and we have good reason to believe that changing this value to a size that isn't a power of two can cause problems. One potential problem is that the B+ tree keys assume that a certain number of low-order bits in the chunk position are zero. Another is that a chunk size smaller than some internal parameters (such as the checksum block size) is unlikely to work. Note that this is not an exhaustive list of things that might go wrong. Also, reducing the chunk size causes further performance penalties, such as increased meta server RAM usage and larger chunk budgets, hence the overhead.
Hope this helps,
Mehmet