MongoDB takes long pauses / breaks during bulk upserts


Gerold Böhler

Apr 26, 2016, 5:08:08 PM
to mongodb-user
Hi all,

I am doing bulk upserts to a sharded cluster in batches of 1000. Everything seems to work fine, except that in the middle of the script it suddenly pauses - sometimes for just minutes, sometimes for half an hour - and then proceeds normally. Everything seems to get inserted; it just takes those long pauses. I am not quite sure where to look for the problem. I tried profiling for long-running operations, but nothing showed up. I also looked at the server stats, and it seems the server is doing some CPU work, but no I/O.

The collection has about 1.5 million documents and about 15 indexes.

Any hints on where to look are greatly appreciated!

Gerold

Kevin Adistambha

May 4, 2016, 10:12:47 PM
to mongodb-user

Hi Gerold,

I am doing bulk upserts to a sharded cluster in batches of 1000. Everything seems to work fine, except that in the middle of the script it suddenly pauses - sometimes for just minutes, sometimes for half an hour - and then proceeds normally.

Could you provide more information:

  • What do the logs show during this pause period? (i.e. mongod or mongos logs), and what is the exact command you are executing?
  • Please provide the output of sh.status() (to see your shard key, and chunks distribution)
  • What is your MongoDB server version, and client driver version, if applicable
  • What is your configured storage engine and its options
  • What is your O/S and the filesystem that contains the /data/db directory
  • What is your deployment topology (i.e. how many shards, do the shards consist of a replica set each, how many config servers, how many mongos, etc.)

Also, can you share a code snippet that performs the bulk operation, along with any write concern for the operation?
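For reference, a bulk upsert is normally expressed as a batch of updateOne operations with upsert: true. A minimal, hypothetical sketch in mongo-shell-style JavaScript (the helper, collection, and field names are placeholders, not taken from your setup):

```javascript
// Hypothetical sketch: build a batch of upsert operations that could be
// passed to db.collection.bulkWrite(). Field names are placeholders.
function buildUpsertBatch(docs) {
  return docs.map(function (doc) {
    return {
      updateOne: {
        filter: { filter_hash: doc.filter_hash }, // placeholder match key
        update: { $set: doc },
        upsert: true
      }
    };
  });
}

// Usage (mongo shell), with an explicit write concern:
//   db.products.bulkWrite(buildUpsertBatch(batch),
//                         { ordered: false, writeConcern: { w: 1 } });
```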

I tried profiling for long-running operations, but nothing showed up. I also looked at the server stats, and it seems the server is doing some CPU work, but no I/O.

A couple of questions:

  • How and on which server did you turn on the profiler?
  • On which server and using what command did you run the server stats? Could you show us some example output?

Best regards,
Kevin

Gerold Böhler

May 7, 2016, 6:19:12 AM
to mongodb-user
Hi Kevin,

Thanks for the reply! I think I have provided everything you asked for except the logs; I have to wait until the problem happens again to capture those. Everything else should be attached, though.

Thanks for looking into this,
Gerold


On Thursday, May 5, 2016 at 04:12:47 UTC+2, Kevin Adistambha wrote:

Hi Gerold,

I am doing bulk upserts to a sharded cluster in batches of 1000. Everything seems to work fine, except that in the middle of the script it suddenly pauses - sometimes for just minutes, sometimes for half an hour - and then proceeds normally.

Could you provide more information:

  • What do the logs show during this pause period? (i.e. mongod or mongos logs), and what is the exact command you are executing?
I am using Cloud Manager to manage my setup. I have 3 virtual hosts (s25, s26, s27) and everything runs on my own hardware. Basically I have 3 collections:
  • landingpages
  • products
  • default
The "landingpages" collection is a sharded collection with 2 mongos, which run on s26 and s27. "products" and "default" are regular collections whose replica sets both run on s25. In my import script I do bulk upserts on the products collection as well as on the landingpages collection. Since I noticed that s25 starts to swap after a while and its CPU is also busy, I assume the problem is the upserts on the products collection. I have also attached screenshots of my Cloud Manager setup.

So if I send you the output of mongodb.log for the primary of that replica set, is that OK, or do you need the logs for the secondaries as well?
 
  • Please provide the output of sh.status() (to see your shard key, and chunks distribution)
--- Sharding Status ---
  sharding version: {
     "_id" : 1,
     "minCompatibleVersion" : 5,
     "currentVersion" : 6,
     "clusterId" : ObjectId("56b9da368b5c94d1f64438e2")
  }
  shards:
     {  "_id" : "shard_0",  "host" : "shard_0/s26.xxx:27000,s26.xxx:27001,s27.xxx:27000" }
     {  "_id" : "shard_1",  "host" : "shard_1/s26.xxx:27002,s27.xxx:27001,s27.xxx:27002" }
  active mongoses:
     "3.2.6" : 2
  balancer:
     Currently enabled:  yes
     Currently running:  no
     Failed balancer rounds in last 5 attempts:  1
     Last reported error:  could not get updated shard list from config server due to interrupted at shutdown
     Time of Reported error:  Thu May 05 2016 22:00:46 GMT+0200 (CEST)
     Migration Results for the last 24 hours:
        No recent migrations
  databases:
     {  "_id" : "xxx_production",  "primary" : "shard_0",  "partitioned" : true }
        xxx_production.landingpages
           shard key: { "filter_hash" : "hashed" }
           unique: false
           balancing: true
           chunks:
              shard_0  16
              shard_1  16
           too many chunks to print, use verbose if you want to force print

  • What is your MongoDB server version, and client driver version, if applicable
Right now I use 3.2.6. On the client side, I use mongoid 5.1.3.
  • What is your configured storage engine and its options
WiredTiger with default options (I couldn't find out how to get the options from the mongo shell). The only thing I changed was cacheSizeGB, set to 1 GB for "default" and "products", because s25 used to run out of memory.
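For reference, that setting corresponds to this fragment of the mongod configuration file (YAML format, MongoDB 3.x), shown here only as a sketch of the option under discussion:

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```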
  • What is your O/S and the filesystem that contains the /data/db directory
All servers are running "Linux s25 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17) x86_64 GNU/Linux" on XFS.
  • What is your deployment topology (i.e. how many shards, do the shards consist of a replica set each, how many config servers, how many mongos, etc.)
See answer above

Also, can you share a code snippet that performs the bulk operation, along with any write concern for the operation?

I'm not sure that makes sense: I build up an array which I then upsert via the mongoid / Ruby driver, so I don't think you would get much out of it.

I tried profiling for long-running operations, but nothing showed up. I also looked at the server stats, and it seems the server is doing some CPU work, but no I/O.

A couple of questions:

  • How and on which server did you turn on the profiler?
  • On which server and using what command did you run the server stats? Could you show us some example output?
Well, I just looked at db.currentOp(), but nothing there looked strange to me. I also looked at iostat output and nothing was going on there either. I did all of this on all hosts.

So basically everything runs smoothly at the beginning, but over a couple of days s25 starts to use more and more memory. At the same time, the upserts take longer and longer pauses. After a couple of days the server starts to swap; when I stop the process that upserts the documents, memory usage goes back to normal and the swap is freed. Then I start the import process again and the same thing happens all over again :-)


    Regards,
    Gerold
    configs.jpg
    mongos.jpg
    shards.jpg

    Gerold Böhler

May 7, 2016, 7:28:54 AM
    to mongodb-user
    Hi again,

So I did manage to capture the log output during the pause, and it looks to me as if the upserts on the products collection are quite expensive:


    2016-05-07T13:11:18.542+0200 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:33683 #1707 (23 connections now open)
    2016-05-07T13:11:21.064+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 656, category_1_id: 1324, category_2_id: 2560, color_0_id: 404, color_1_id: 444, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:5 nModified:5 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 278863, w: 278863 } }, Database: { acquireCount: { w: 278863 } }, Collection: { acquireCount: { w: 278777 } }, Metadata: { acquireCount: { w: 86 } }, oplog: { acquireCount: { w: 86 } } } 5678ms
    2016-05-07T13:11:26.727+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 696, category_1_id: 1942, color_0_id: 403, color_1_id: 435, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:22 nModified:4 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 288480, w: 288480 } }, Database: { acquireCount: { w: 288480 } }, Collection: { acquireCount: { w: 288390 } }, Metadata: { acquireCount: { w: 90 } }, oplog: { acquireCount: { w: 90 } } } 5662ms
    2016-05-07T13:11:32.368+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, brand_0_id: 390, gender_id: 641, category_0_id: 645, category_1_id: 1013, color_0_id: 415, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:19 nModified:3 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 298096, w: 298096 } }, Database: { acquireCount: { w: 298096 } }, Collection: { acquireCount: { w: 298003 } }, Metadata: { acquireCount: { w: 93 } }, oplog: { acquireCount: { w: 93 } } } 5641ms
    2016-05-07T13:11:38.024+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 657, category_1_id: 1341, color_0_id: 403, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:45 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 307710, w: 307710 } }, Database: { acquireCount: { w: 307710 } }, Collection: { acquireCount: { w: 307616 } }, Metadata: { acquireCount: { w: 94 } }, oplog: { acquireCount: { w: 94 } } } 5655ms
    2016-05-07T13:11:43.671+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, brand_0_id: 371, gender_id: 641, category_0_id: 669, color_0_id: 418, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:5 nModified:4 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 317327, w: 317327 } }, Database: { acquireCount: { w: 317327 } }, Collection: { acquireCount: { w: 317229 } }, Metadata: { acquireCount: { w: 98 } }, oplog: { acquireCount: { w: 98 } } } 5647ms
    2016-05-07T13:11:49.358+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 686, category_1_id: 1846, color_0_id: 403, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:5 nModified:5 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 326945, w: 326945 } }, Database: { acquireCount: { w: 326945 } }, Collection: { acquireCount: { w: 326842 } }, Metadata: { acquireCount: { w: 103 } }, oplog: { acquireCount: { w: 103 } } } 5686ms
    2016-05-07T13:11:54.990+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, brand_0_id: 356, gender_id: 642, category_0_id: 646, category_1_id: 1126, color_0_id: 403, color_1_id: 435, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:20 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 336559, w: 336559 } }, Database: { acquireCount: { w: 336559 } }, Collection: { acquireCount: { w: 336455 } }, Metadata: { acquireCount: { w: 104 } }, oplog: { acquireCount: { w: 104 } } } 5631ms
    2016-05-07T13:12:00.657+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 666, color_0_id: 403, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:163 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 346173, w: 346173 } }, Database: { acquireCount: { w: 346173 } }, Collection: { acquireCount: { w: 346068 } }, Metadata: { acquireCount: { w: 105 } }, oplog: { acquireCount: { w: 105 } } } 5667ms
    2016-05-07T13:12:06.317+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 651, color_0_id: 409, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:7 nModified:7 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 355793, w: 355793 } }, Database: { acquireCount: { w: 355793 } }, Collection: { acquireCount: { w: 355681 } }, Metadata: { acquireCount: { w: 112 } }, oplog: { acquireCount: { w: 112 } } } 5660ms
    2016-05-07T13:12:12.029+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 660, category_1_id: 1383, color_0_id: 403, color_1_id: 435, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:17 nModified:8 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 365414, w: 365414 } }, Database: { acquireCount: { w: 365414 } }, Collection: { acquireCount: { w: 365294 } }, Metadata: { acquireCount: { w: 120 } }, oplog: { acquireCount: { w: 120 } } } 5711ms
    2016-05-07T13:12:17.709+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 669, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:148 nModified:10 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 375037, w: 375037 } }, Database: { acquireCount: { w: 375037 } }, Collection: { acquireCount: { w: 374907 } }, Metadata: { acquireCount: { w: 130 } }, oplog: { acquireCount: { w: 130 } } } 5680ms
    2016-05-07T13:12:23.384+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 689, category_1_id: 1854, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:38 nModified:2 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 384652, w: 384652 } }, Database: { acquireCount: { w: 384652 } }, Collection: { acquireCount: { w: 384520 } }, Metadata: { acquireCount: { w: 132 } }, oplog: { acquireCount: { w: 132 } } } 5675ms
    2016-05-07T13:12:29.068+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 657, category_1_id: 1465, color_0_id: 403, color_1_id: 427, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:10 nModified:4 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 394269, w: 394269 } }, Database: { acquireCount: { w: 394269 } }, Collection: { acquireCount: { w: 394133 } }, Metadata: { acquireCount: { w: 136 } }, oplog: { acquireCount: { w: 136 } } } 5683ms
    2016-05-07T13:12:34.746+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 645, category_1_id: 1009, color_0_id: 415, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:170 nModified:5 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 403887, w: 403887 } }, Database: { acquireCount: { w: 403887 } }, Collection: { acquireCount: { w: 403746 } }, Metadata: { acquireCount: { w: 141 } }, oplog: { acquireCount: { w: 141 } } } 5677ms
    2016-05-07T13:12:40.396+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 646, category_1_id: 1120, color_0_id: 416, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:3 nModified:3 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 413503, w: 413503 } }, Database: { acquireCount: { w: 413503 } }, Collection: { acquireCount: { w: 413359 } }, Metadata: { acquireCount: { w: 144 } }, oplog: { acquireCount: { w: 144 } } } 5650ms
    2016-05-07T13:12:46.024+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 646, category_1_id: 1113, color_0_id: 407, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:46 nModified:2 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 423118, w: 423118 } }, Database: { acquireCount: { w: 423118 } }, Collection: { acquireCount: { w: 422972 } }, Metadata: { acquireCount: { w: 146 } }, oplog: { acquireCount: { w: 146 } } } 5627ms
    2016-05-07T13:12:51.671+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 706, category_1_id: 1984, category_2_id: 2972, color_0_id: 403, color_1_id: 429, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:39 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 432732, w: 432732 } }, Database: { acquireCount: { w: 432732 } }, Collection: { acquireCount: { w: 432585 } }, Metadata: { acquireCount: { w: 147 } }, oplog: { acquireCount: { w: 147 } } } 5646ms
    2016-05-07T13:12:57.339+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 646, category_1_id: 1124, color_0_id: 413, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:31 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 442346, w: 442346 } }, Database: { acquireCount: { w: 442346 } }, Collection: { acquireCount: { w: 442198 } }, Metadata: { acquireCount: { w: 148 } }, oplog: { acquireCount: { w: 148 } } } 5668ms
    2016-05-07T13:13:03.041+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 660, category_1_id: 1386, color_0_id: 413, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:147 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 451960, w: 451960 } }, Database: { acquireCount: { w: 451960 } }, Collection: { acquireCount: { w: 451811 } }, Metadata: { acquireCount: { w: 149 } }, oplog: { acquireCount: { w: 149 } } } 5701ms
    2016-05-07T13:13:08.655+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, brand_0_id: 371, gender_id: 641, category_0_id: 706, category_1_id: 1984, color_0_id: 403, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:5 nModified:4 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 461577, w: 461577 } }, Database: { acquireCount: { w: 461577 } }, Collection: { acquireCount: { w: 461424 } }, Metadata: { acquireCount: { w: 153 } }, oplog: { acquireCount: { w: 153 } } } 5614ms
    2016-05-07T13:13:14.332+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 641, category_0_id: 660, category_1_id: 1362, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:17 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 471191, w: 471191 } }, Database: { acquireCount: { w: 471191 } }, Collection: { acquireCount: { w: 471037 } }, Metadata: { acquireCount: { w: 154 } }, oplog: { acquireCount: { w: 154 } } } 5676ms
    2016-05-07T13:13:19.947+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 669, color_0_id: 403, color_1_id: 435, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:24 nModified:8 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 480812, w: 480812 } }, Database: { acquireCount: { w: 480812 } }, Collection: { acquireCount: { w: 480650 } }, Metadata: { acquireCount: { w: 162 } }, oplog: { acquireCount: { w: 162 } } } 5615ms
    2016-05-07T13:13:25.632+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, gender_id: 642, category_0_id: 646, category_1_id: 1131, color_0_id: 415, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:10 nModified:1 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 490426, w: 490426 } }, Database: { acquireCount: { w: 490426 } }, Collection: { acquireCount: { w: 490263 } }, Metadata: { acquireCount: { w: 163 } }, oplog: { acquireCount: { w: 163 } } } 5684ms
    2016-05-07T13:13:31.296+0200 I WRITE    [conn1663] update xxx_production.products query: { filter_hash: { main_category_id: 2, brand_0_id: 386, gender_id: 641, category_0_id: 696, category_1_id: 1942, color_0_id: 402, shop_id: 19009, invoice_id: 18880, free_shipping_id: 639 } } update: { $set: { landingpages_created: true, state: "deployed" } } keysExamined:0 docsExamined:1230458 nMatched:3 nModified:2 keyUpdates:2 writeConflicts:0 numYields:9612 locks:{ Global: { acquireCount: { r: 500041, w: 500041 } }, Database: { acquireCount: { w: 500041 } }, Collection: { acquireCount: { w: 499876 } }, Metadata: { acquireCount: { w: 165 } }, oplog: { acquireCount: { w: 165 } } } 5663ms
    2016-05-07T13:13:32.737+0200 I COMMAND  [conn1434] command xxx_production.products command: find { find: "products", filter: { filter_hash.main_category_id: 2, filter_hash.gender_id: 641, filter_hash.category_0_id: 689, filter_hash.category_1_id: 1858, filter_hash.color_0_id: 415, state: "deployed" }, sort: { click_count: -1, product_hash.shop_updated_at: -1 } } planSummary: IXSCAN { filter_hash.color_0_id: 1 } cursorid:48665440506 keysExamined:168242 docsExamined:168242 hasSortStage:1 keyUpdates:0 writeConflicts:0 numYields:1315 nreturned:101 reslen:543730 locks:{ Global: { acquireCount: { r: 2632 } }, Database: { acquireCount: { r: 1316 } }, Collection: { acquireCount: { r: 1316 } } } protocol:op_query 2220ms

I have to admit, the products collection has quite a few indexes, and there are currently 1.2 million documents in it:



products:PRIMARY> db.products.getIndexes()
    [
    {
    "v" : 1,
    "key" : {
    "_id" : 1
    },
    "name" : "_id_",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "feed_id" : 1
    },
    "name" : "product_feed_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "state" : 1
    },
    "name" : "product_state_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "landingpages_created" : 1
    },
    "name" : "product_landingpages_created",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "generic_sku" : 1
    },
    "name" : "product_generic_sku_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "sku" : 1
    },
    "name" : "product_sku_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "sha" : 1
    },
    "name" : "product_sha_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "feed_id" : 1,
    "generic_sku" : 1,
    "sku" : 1
    },
    "name" : "product_feed_generic_sku_index",
    "ns" : "xxx_production.products",
    "sparse" : true
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.main_category_id" : 1,
    "click_count" : -1,
    "product_hash.shop_updated_at" : -1
    },
    "name" : "product_sort_best_and_new_products_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.main_category_id" : 1,
    "product_hash.price" : 1,
    "click_count" : -1,
    "product_hash.shop_updated_at" : -1
    },
    "name" : "product_sort_low_prices_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.main_category_id" : 1,
    "product_hash.price" : -1,
    "click_count" : -1,
    "product_hash.shop_updated_at" : -1
    },
    "name" : "product_sort_high_prices_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.main_category_id" : 1,
    "product_hash.reduction" : -1,
    "click_count" : -1,
    "product_hash.shop_updated_at" : -1
    },
    "name" : "product_sort_reduction_index",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.main_category_id" : 1
    },
    "name" : "filter_hash.main_category_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.merchandise_id" : 1
    },
    "name" : "filter_hash.merchandise_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.brand_0_id" : 1
    },
    "name" : "filter_hash.brand_0_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.brand_1_id" : 1
    },
    "name" : "filter_hash.brand_1_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.gender_id" : 1
    },
    "name" : "filter_hash.gender_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.category_0_id" : 1
    },
    "name" : "filter_hash.category_0_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.category_1_id" : 1
    },
    "name" : "filter_hash.category_1_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.category_2_id" : 1
    },
    "name" : "filter_hash.category_2_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.category_3_id" : 1
    },
    "name" : "filter_hash.category_3_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.category_4_id" : 1
    },
    "name" : "filter_hash.category_4_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.category_5_id" : 1
    },
    "name" : "filter_hash.category_5_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.package_id" : 1
    },
    "name" : "filter_hash.package_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.reduced_id" : 1
    },
    "name" : "filter_hash.reduced_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.color_0_id" : 1
    },
    "name" : "filter_hash.color_0_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.color_1_id" : 1
    },
    "name" : "filter_hash.color_1_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.material_id" : 1
    },
    "name" : "filter_hash.material_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.pattern_id" : 1
    },
    "name" : "filter_hash.pattern_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.size_id" : 1
    },
    "name" : "filter_hash.size_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.bio_id" : 1
    },
    "name" : "filter_hash.bio_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.shop_id" : 1
    },
    "name" : "filter_hash.shop_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.invoice_id" : 1
    },
    "name" : "filter_hash.invoice_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.free_shipping_id" : 1
    },
    "name" : "filter_hash.free_shipping_id",
    "ns" : "xxx_production.products"
    },
    {
    "v" : 1,
    "key" : {
    "filter_hash.coupon_id" : 1
    },
    "name" : "filter_hash.coupon_id",
    "ns" : "xxx_production.products"
    }
    ]


    Is there a way to improve the upsert performance? 

    Gerold Böhler

May 7, 2016, 8:04:24 AM
    to mongodb-user
OK, thanks for listening, but I figured it out myself - I had just missed the index on filter_hash on the products collection. I didn't know about that logfile, so thanks for the tip ;-)
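For completeness, the missing piece was along these lines (a mongo shell sketch, requiring a connection to the cluster). The existing filter_hash.* indexes only cover the dotted subfields; an exact match on the whole filter_hash subdocument can only use an index on filter_hash itself, which is why the log above showed keysExamined:0 with docsExamined:1230458, i.e. a full collection scan per update:

```javascript
// Index the whole filter_hash subdocument so exact-match queries on it
// stop scanning the entire collection.
db.products.createIndex({ filter_hash: 1 })
```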


On Thursday, May 5, 2016 at 04:12:47 UTC+2, Kevin Adistambha wrote: