Hi,
To support advanced features like causal consistency and retryable writes, the config.system.sessions collection must be used, as you have found. However, there is a known issue with certain drivers “leaking” sessions, as described in DRIVERS-453. If you’re using an affected driver, could you upgrade your driver version and check whether the updates to config.system.sessions are still causing a measurable impact on your workload?
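For reference, here is a minimal mongo shell sketch of the explicit session lifecycle; a driver affected by the leak essentially fails to end sessions promptly, leaving stale entries behind. The database name here is just a placeholder:

var session = db.getMongo().startSession();
var testDb = session.getDatabase("test");   // "test" is a placeholder database
testDb.runCommand({ ping: 1 });             // any operation associated with the session
session.endSession();                       // without this, the session lingers until it expires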
Best regards,
Kevin
When a user creates a session on a mongod or mongos instance, the record of the session initially exists only in-memory on the instance; i.e. the record is local to the instance. Periodically, the instance will sync its cached sessions to the system.sessions collection in the config database, at which time they become visible to $listSessions and all members of the deployment. Until the session record exists in the system.sessions collection, you can only list the session via the $listLocalSessions operation.
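For example, in the mongo shell you can compare the two views (the allUsers option assumes you have the required listSessions privilege):

// Sessions cached in-memory on the current mongod/mongos instance:
use admin
db.aggregate([ { $listLocalSessions: { allUsers: true } } ])

// Sessions already synced to config.system.sessions, visible deployment-wide:
use config
db.system.sessions.aggregate([ { $listSessions: { allUsers: true } } ])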
Hi Daniel,
As seen in the screenshot, it looks like all the mongod/mongos nodes are updating this collection at pretty much the same time. Is this timed by a global cluster time? I would expect these updates to be somewhat staggered, rather than all the cluster members issuing them at the same time.
The updates to the config.system.sessions collection are timed by each mongod process; it is essentially a scheduled task that runs at a predetermined interval. Thus in a sharded cluster, the updates should be staggered as you expect, since each individual mongod has its own schedule.
If you’re finding that the updates always occur at the same time, you might want to check that your config.system.sessions collection is properly sharded. The output of sh.status() should show this.
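For instance, from a mongos shell (this assumes your version stores chunk metadata keyed by namespace in config.chunks):

sh.status()   // look for config.system.sessions under the "config" database section

// Or inspect the chunk metadata directly:
use config
db.chunks.find({ ns: "config.system.sessions" }).itcount()   // number of chunks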
Another question: is the extensive use of cursors negatively impacting this behaviour?
Technically yes, but in practice we haven’t seen updates to this collection be detrimental to the cluster’s overall performance. Multiple clients performing operations on the cluster are typically far more demanding, resource-wise.
Is there a server option that controls the rate at which the cached sessions on each node are synced to this collection?
I don’t believe the rate itself is controllable. Generally, this behaviour is governed by the logicalSessionRefreshMillis parameter.
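You can confirm the current value in the mongo shell; it defaults to 300000 ms (5 minutes) and, as far as I know, can only be set at startup:

db.adminCommand({ getParameter: 1, logicalSessionRefreshMillis: 1 })
// e.g. set at startup: mongod --setParameter logicalSessionRefreshMillis=300000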
Best regards,
Kevin
If you’re finding that the updates always occur at the same time, you might want to check that your config.system.sessions collection is properly sharded. The output of sh.status() should show this.
{ "_id" : "config", "primary" : "config", "partitioned" : true } config.system.sessions shard key: { "_id" : 1 } unique: false balancing: true chunks: scstage-eastus2-ReplSet1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : sctage-eu-west-ReplSet1 Timestamp(1, 0)
db.system.sessions.stats()
{
    "ns" : "config.system.sessions",
    "size" : 987075,
    "count" : 5777,
    "avgObjSize" : 170,
    "storageSize" : 1765376,
    ...
Technically yes, but in practice we haven’t seen updates to this collection be detrimental to the cluster’s overall performance. Multiple clients performing operations on the cluster are typically far more demanding, resource-wise.
mongos> db.system.sessions.find(){ "_id" : { "id" : UUID("8d4e1767-3389-4cf1-b965-acfb55123a7e"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:58:47.473Z") }{ "_id" : { "id" : UUID("9f6de236-0dea-45cd-823a-e1dda73d22ad"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:53:49.530Z") }{ "_id" : { "id" : UUID("f2da4b55-8f69-4f82-868e-063ae8efc78e"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:43:41.638Z") }{ "_id" : { "id" : UUID("c9f578ab-3c14-487d-b144-7ed5d2ad3fab"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:58:36.380Z") }{ "_id" : { "id" : UUID("dd77e664-c32f-4388-93d3-318e0b043d5c"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:43:41.638Z") }{ "_id" : { "id" : UUID("b3e03d60-6e03-4557-b19b-8aa8b1936a88"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:48:47.468Z") }{ "_id" : { "id" : UUID("df5e26c7-17fc-46f0-833d-34d47e054c6f"), "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=") }, "lastUse" : ISODate("2019-04-11T09:53:47.467Z") }....
Hi Daniel,
There is an internal process that maintains the config.system.sessions collection, and it is supposed to shard and split it as necessary. I don’t know why this process appears to have failed in your case. In my opinion, it’s best not to perform manual maintenance on the collection. If you have a lot of sessions, the collection should eventually be split and distributed among the shards automatically. You could try to induce the split by, for example, using only one mongos process during a load test to reduce variability.
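If you want to verify that the split eventually happens, you could periodically check the chunk distribution per shard from a mongos shell (again assuming your version keys chunk metadata by namespace in config.chunks):

use config
db.chunks.aggregate([
    { $match: { ns: "config.system.sessions" } },
    { $group: { _id: "$shard", chunks: { $sum: 1 } } }
])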
Best regards,
Kevin