We are experiencing an odd issue with a new sharded cluster we are setting
up. There are 5 shards, set up before any data is loaded. The shard key
is _id, and the format of the key is:
XXXXXXX_MongoID where XXXXXXX is a random number.
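To make the key format concrete, here is a minimal sketch of how such an _id could be generated. The function name and the ObjectId stand-in are illustrative only, not our actual code:

```javascript
// Sketch of the key format described above: a random numeric prefix,
// an underscore, then an ObjectId-style 24-char hex string.
function makeShardKey() {
  const prefix = Math.floor(Math.random() * 10000000); // the XXXXXXX part
  // stand-in for a MongoDB ObjectId hex string
  const mongoId = Array.from({ length: 24 }, () =>
    Math.floor(Math.random() * 16).toString(16)
  ).join("");
  return `${prefix}_${mongoId}`;
}

console.log(makeShardKey()); // prints a key like "4821937_<24 hex chars>"
```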
The weird issue we are having is that when we insert a bunch of documents
into the cluster and splits happen, the splits do not seem to respect the
chunk size (which has not been altered and should be 64MB). We end up with
some chunks covering a big key range and others covering a very, very small
one. For example:
An example of a big range:
items in this chunk: 254199
An example of a small range:
items in this chunk: 109
If you examine the documents in the small-range chunk, none of them are
abnormally large or anywhere close to adding up to 64MB. So it really feels
like the chunk-splitting mechanism is running away every once in a while
and splitting chunks more than it should. The resulting problem is that,
because the chunks are not the same size, even when the number of chunks is
balanced across the shards, the amount of data on each shard is completely
unbalanced.
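For anyone who wants to reproduce the measurement, this is roughly how the actual data size of a suspect chunk can be checked from the mongo shell via the dataSize command (the "mydb.mycoll" namespace is a placeholder; the chunk bounds are read from config.chunks):

```javascript
// Run in the mongo shell against a mongos.
// "mydb.mycoll" is a placeholder namespace; substitute your collection.
var conf = db.getSiblingDB("config");
var chunk = conf.chunks.findOne({ ns: "mydb.mycoll" }); // pick a suspect chunk
var res = db.getSiblingDB("mydb").runCommand({
  dataSize: "mydb.mycoll",
  keyPattern: { _id: 1 },   // the shard key pattern
  min: chunk.min,           // chunk lower bound from config.chunks
  max: chunk.max            // chunk upper bound from config.chunks
});
printjson(res); // res.size = bytes in the range, res.numObjects = doc count
```

Comparing res.size for a "big range" chunk against a "small range" chunk is what makes the imbalance obvious: both are far from the 64MB target.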
We are running 2.2.1 on every box in the system: mongod, config servers,
and mongos. Any ideas what could be at play here that I'm missing?