Thu Aug 30 12:16:42 [Balancer] ns: production.people going to move { _id: "production.people-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('503e9d5ef940d75c2de07f8e'), ns: "production.people", min: { _id: MinKey }, max: { _id: 304836 }, shard: "s1" } from: s1 to: s2 tag []
Thu Aug 30 12:16:42 [Balancer] moving chunk ns: production.people moving ( ns:production.people at: s1:s1/mongo11.example.com:20011,mongo12.example.com:20012 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: 304836 }) s1:s1/mongo11.example.com:20011,mongo12.example.com:20012 -> s2:s2/mongo21.example.com:20021,mongo22.example.com:20022
Thu Aug 30 12:16:43 [Balancer] moveChunk result: { cause: { errmsg: "migrate already in progress", ok: 0.0 }, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", ok: 0.0 }
Thu Aug 30 12:16:43 [Balancer] balancer move failed: { cause: { errmsg: "migrate already in progress", ok: 0.0 }, errmsg: "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress", ok: 0.0 } from: s1 to: s2 chunk: min: { _id: MinKey } max: { _id: MinKey }
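For what it's worth, something like this (a rough sketch, run against the primary of the TO-shard rather than against mongos; the exact currentOp fields vary by server version) should show whether s2 really does have a migration in flight:

// Sketch: run on the primary of the destination shard (s2 in the log above).
// It scans current operations for anything that looks like an incoming migration.
db.currentOp().inprog.forEach(function (op) {
    var text = (op.desc || "") + " " + (op.msg || "");
    if (/migrate/i.test(text)) {
        printjson(op);   // any output here means a migration really is still running
    }
});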
mongos> db.locks.find();
{ "_id" : "admin-movePrimary", "process" : "Web1:27069:1343571761:1804289383", "state" : 0, "ts" : ObjectId("501555d74439248d85dc8867"), "when" : ISODate("2012-07-29T15:25:11.099Z"), "who" : "Web1:27069:1343571761:1804289383:conn122:1714636915", "why" : "Moving primary shard of admin" }
{ "_id" : "example_production-movePrimary", "process" : "Web1:27069:1343571761:1804289383", "state" : 0, "ts" : ObjectId("501553614439248d85dc885a"), "when" : ISODate("2012-07-29T15:14:41.616Z"), "who" : "Web1:27069:1343571761:1804289383:conn1:1681692777", "why" : "Moving primary shard of example_production" }
{ "_id" : "example_production_vanity-movePrimary", "process" : "Web1:27069:1343571761:1804289383", "state" : 0, "ts" : ObjectId("501552fb4439248d85dc8855"), "when" : ISODate("2012-07-29T15:12:59.598Z"), "who" : "Web1:27069:1343571761:1804289383:conn1:1681692777", "why" : "Moving primary shard of example_production_vanity" }
{ "_id" : "balancer", "process" : "web1:27069:1346283357:314909341", "state" : 2, "ts" : ObjectId("503f4f2a3c113ffbd8e4a7e9"), "when" : ISODate("2012-08-30T11:31:54.320Z"), "who" : "web1:27069:1346283357:314909341:Balancer:1842493053", "why" : "doing balance round" }
{ "_id" : "example_production.people", "process" : "mongo11:20011:1346282264:758785138", "state" : 0, "ts" : ObjectId("503f4f2ce69a6c2009e22331"), "when" : ISODate("2012-08-30T11:31:56.182Z"), "who" : "mongo11:20011:1346282264:758785138:conn37208:1670912857", "why" : "migrate-{ _id: MinKey }" }
{ "_id" : "example_production.new_coll", "process" : "web1:27069:1346283357:314909341", "state" : 0, "ts" : ObjectId("503f36eb3c113ffbd8e4a6a1"), "when" : ISODate("2012-08-30T09:48:27.208Z"), "who" : "web1:27069:1346283357:314909341:conn37665:149759223", "why" : "drop" }
{ "_id" : "example_production_vanity.metrics", "process" : "web1:27069:1346283357:314909341", "state" : 0, "ts" : ObjectId("503f37833c113ffbd8e4a6aa"), "when" : ISODate("2012-08-30T09:50:59.474Z"), "who" : "web1:27069:1346283357:314909341:conn37665:149759223", "why" : "drop" }
Thanks!
Dani
I've got a similar problem with my database: I had two shards, added a third, and now the third one won't accept any data. The mongos logs show the same "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress" message that Daniel got. I've tried bouncing just the mongoses, then both the mongoses and the mongods, but I still get the same message. I also get
[Balancer] distributed lock 'balancer/dbs3a:27017:1346462580:1804289383' unlocked.
but I think that's just the balancer giving up its lock.
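If it matters, something like this (just a sketch, assuming config.locks is the right place to look; state 0 should mean released and 2 held) confirms whether the balancer still holds its lock:

// Sketch: run while connected to mongos.
db.getSiblingDB("config").locks.find({ _id: "balancer" }).forEach(printjson);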
Any advice would be greatly appreciated.
Geoff