sudo grep "chunk too big" /var/log/mongodb/mongos.log-20150903 | grep -c "XXXXX"
db.chunks.find( { "jumbo": true } );
If the split is unsuccessful, MongoDB labels the chunk as jumbo to avoid repeated attempts to migrate the chunk.
Hi Rhys,
As a test, I ran a small-scale shard drain using mongos v3.0.5 and adjusted the chunk size so that I could replicate a ‘chunk too big to move’ scenario.
There were no jumbo chunks to start with, nor after the draining of the shard had completed.
However, as you’ve described, during the migration process I could see a number of ‘chunk too big to move’ entries in the mongos log.
Looking at the log, when moving a chunk the mongos will try to split the chunk if it is found to be larger than the current chunk size. Furthermore, the occurrences of these ‘chunk too big to move’ entries do not correlate with the existence of jumbo chunks.
2015-09-22T11:51:37.579+1000 I SHARDING [Balancer] going to move { _id: "test.mgendata-keyA_"UMVjrH"", ns: "test.mgendata", min: { keyA: "UMVjrH" }, max: { keyA: "V09LSe" }, version: Timestamp 137000|1, versionEpoch: ObjectId('5600a23794724bd8d4a540bf'), lastmod: Timestamp 137000|1, lastmodEpoch: ObjectId('5600a23794724bd8d4a540bf'), shard: "mint2" } from mint2() to mint
2015-09-22T11:51:37.579+1000 I SHARDING [Balancer] moving chunk ns: test.mgendata moving ( ns: test.mgendata, shard: mint2:mint2/mint17:28001, lastmod: 137|1||000000000000000000000000, min: { keyA: "UMVjrH" }, max: { keyA: "V09LSe" }) mint2:mint2/mint17:28001 -> mint:mint/mint17:28000
2015-09-22T11:51:37.605+1000 I SHARDING [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 2893200, ok: 0.0, errmsg: "chunk too big to move", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('5600a3c3ceb46a8cbda0f6c0') } }
2015-09-22T11:51:37.605+1000 I SHARDING [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 2893200, ok: 0.0, errmsg: "chunk too big to move", $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('5600a3c3ceb46a8cbda0f6c0') } } from: mint2 to: mint chunk: min: { keyA: "UMVjrH" } max: { keyA: "V09LSe" }
2015-09-22T11:51:37.605+1000 I SHARDING [Balancer] performing a split because migrate failed for size reasons
2015-09-22T11:51:37.650+1000 I SHARDING [Balancer] ChunkManager: time to load chunks for test.mgendata: 0ms sequenceNumber: 171 version: 137|7||5600a23794724bd8d4a540bf based on: 137|1||5600a23794724bd8d4a540bf
2015-09-22T11:51:37.650+1000 I SHARDING [Balancer] split results: OK
As I could not see any jumbo chunks created during my test, I ran a separate test to replicate the ‘jumbo’ scenario.
In this test, a number of split processes were unsuccessful and the chunks were flagged as jumbo.
2015-09-22T16:09:15.766+1000 I SHARDING [Balancer] split results: CannotSplit chunk not full enough to trigger auto-split
2015-09-22T16:09:15.766+1000 I SHARDING [Balancer] marking chunk as jumbo: ns: test.mgendata, shard: mint:mint/mint17:28000, lastmod: 3|1||000000000000000000000000, min: { status: "inprogress" }, max: { status: "pending" }
The existence of jumbo chunks that could not be automatically split prevented the shard drain from completing, and showed up in the mongos log as an error. In this case, administrative intervention may be required to clear the jumbo flag.
2015-09-22T16:12:38.286+1000 W SHARDING [Balancer] can't find any chunk to move from: mint but we want to. numJumboChunks: 4
2015-09-22T16:12:38.286+1000 E SHARDING [Balancer] shard: mint ns: test.mgendata has too many chunks, but they are all jumbo numJumboChunks: 4
I suspect repeated attempts to migrate a jumbo chunk add significant time to the draining of a shard.
As per the above observations, any chunk found to be too large to move will be logged and, where possible, split into smaller chunks. The balancer will not attempt to move chunks marked as jumbo; these require administrative intervention.
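For reference, the documented intervention is to split the jumbo chunk manually from a mongos so the balancer can move the resulting pieces; only if the chunk is indivisible (a single shard key value) should the flag be cleared directly in the config metadata. A sketch, reusing the namespace and shard key values from the logs above:

```javascript
// Preferred: manually split the jumbo chunk at a point the balancer can work with.
sh.splitFind("test.mgendata", { keyA: "UMVjrH" })

// Last resort, for an indivisible chunk: stop the balancer, then clear the
// jumbo flag on the chunk's metadata document in the config database.
sh.stopBalancer()
db.getSiblingDB("config").chunks.update(
    { ns: "test.mgendata", min: { keyA: "UMVjrH" } },
    { $unset: { jumbo: "" } }
)
sh.startBalancer()
```

These are mongo shell commands run against a live mongos, so they are shown here only as an illustration of the procedure.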
Kind Regards,
Wan.
PS: The new storage engine in MongoDB v3.0.x is called WiredTiger.
"However, as you’ve described, during the migration process I could see a number of ‘chunk too big to move’ entries in the mongos log."
Hi Rhys,
This is expected behaviour, not a bug.
The ‘chunk too big to move’ entries that appear in the log during the migration process are informational; they convey the progress of the migration.
Large chunks can occur in a number of scenarios: for example, when one or more config servers are unavailable, no metadata changes (splits or migrations) can take place, but data can still be inserted. Similarly, if an administrator has lowered the chunkSize setting (see: Modify Chunk Size in a Sharded Cluster), the size of existing chunks may not be checked until a chunk operation such as a migration is attempted.
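For reference, the chunk size is a cluster-wide setting stored in the config database; a minimal sketch from a mongos, assuming a target of 32 MB (64 MB is the default):

```javascript
// From a mongos, switch to the config database and update the chunk size.
// The value is in megabytes.
use config
db.settings.save({ _id: "chunksize", value: 32 })
```

Note that lowering this value only affects chunks as they are next split or migrated; existing oversized chunks are not resized immediately.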
These entries are logged by the balancer at log level I (informational).
Starting in MongoDB v3.0, you can also configure the log verbosity level per MongoDB component.
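For example, to raise only the sharding component’s verbosity from the shell (level 0 is the default; higher values are more verbose):

```javascript
// Raise the sharding component's log verbosity on this mongos.
db.setLogLevel(1, "sharding")

// Inspect the current per-component verbosity settings.
db.getLogComponents()
```

The same can be set permanently via systemLog.component.sharding.verbosity in the configuration file.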
Kind Regards,
Wan
Hi Rhys,
Seeing that the migration process has completed successfully without any jumbo chunks, I would consider this working as expected.
If you suspect this issue is impacting a future migration process, please start a new discussion along with the logs for further investigation.
Thanks and regards,
Wan