Percona MongoDB 3.4 in-memory estimation


david....@logitravelgroup.com

Feb 28, 2017, 10:02:49 AM
to Percona Discussion
We ran a test with Percona MongoDB 3.4's in-memory storage engine.
We loaded a sample database with 100,000 docs and saw the following stats at the database and server level (mongod --storageEngine inMemory --dbpath /var/lib/mongodb/ --inMemorySizeGB 3 --inMemoryStatisticsLogDelaySecs 0):
> db.stats()
{
"db" : "nexus",
"collections" : 1,
"views" : 0,
"objects" : 100000,
"avgObjSize" : 825.80737,
"dataSize" : 82580737,
"storageSize" : 82580737,
"numExtents" : 0,
"indexes" : 1,
"indexSize" : 6600000,
"ok" : 1
}
> db.serverStatus().inMemory.cache
{
"application threads page read from disk to cache count" : 0,
"application threads page read from disk to cache time (usecs)" : 0,
"application threads page write from cache to disk count" : 0,
"application threads page write from cache to disk time (usecs)" : 0,
"bytes belonging to page images in the cache" : 3170107,
"bytes currently in the cache" : 108483792,
"bytes not belonging to page images in the cache" : 105313684,
"bytes read into cache" : 0,
"bytes written from cache" : 0,
"checkpoint blocked page eviction" : 0,
"eviction calls to get a page" : 0,
"eviction calls to get a page found queue empty" : 0,
"eviction calls to get a page found queue empty after locking" : 0,
"eviction currently operating in aggressive mode" : 0,
"eviction empty score" : 0,
"eviction server candidate queue empty when topping up" : 0,
"eviction server candidate queue not empty when topping up" : 0,
"eviction server evicting pages" : 0,
"eviction server slept, because we did not make progress with eviction" : 0,
"eviction server unable to reach eviction goal" : 0,
"eviction state" : 16,
"eviction walks abandoned" : 0,
"eviction worker thread evicting pages" : 0,
"failed eviction of pages that exceeded the in-memory maximum" : 0,
"files with active eviction walks" : 0,
"files with new eviction walks started" : 0,
"hazard pointer blocked page eviction" : 0,
"hazard pointer check calls" : 12,
"hazard pointer check entries walked" : 0,
"hazard pointer maximum array length" : 0,
"in-memory page passed criteria to be split" : 22,
"in-memory page splits" : 11,
"internal pages evicted" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 1,
"lookaside table insert calls" : 0,
"lookaside table remove calls" : 0,
"maximum bytes configured" : 3758096384,
"maximum page size at eviction" : 5242897,
"modified pages evicted" : 1,
"modified pages evicted by application threads" : 0,
"overflow pages read into cache" : 0,
"overflow values cached in memory" : 0,
"page split during eviction deepened the tree" : 0,
"page written requiring lookaside records" : 0,
"pages currently held in the cache" : 270,
"pages evicted because they exceeded the in-memory maximum" : 12,
"pages evicted because they had chains of deleted items" : 0,
"pages evicted by application threads" : 0,
"pages queued for eviction" : 0,
"pages queued for urgent eviction" : 0,
"pages queued for urgent eviction during walk" : 0,
"pages read into cache" : 0,
"pages read into cache requiring lookaside entries" : 0,
"pages requested from the cache" : 200253,
"pages seen by eviction walk" : 0,
"pages selected for eviction unable to be evicted" : 0,
"pages walked for eviction" : 0,
"pages written from cache" : 0,
"pages written requiring in-memory restoration" : 0,
"percentage overhead" : 8,
"tracked bytes belonging to internal pages in the cache" : 27496,
"tracked bytes belonging to leaf pages in the cache" : 108456296,
"tracked dirty bytes in the cache" : 108481917,
"tracked dirty pages in the cache" : 262,
"unmodified pages evicted" : 0
}

After that, we ran the same test with WiredTiger and saw the following stats:
> db.stats()
{
"db" : "nexus",
"collections" : 1,
"views" : 0,
"objects" : 100000,
"avgObjSize" : 825.80737,
"dataSize" : 82580737,
"storageSize" : 29032448,
"numExtents" : 0,
"indexes" : 1,
"indexSize" : 3448832,
"ok" : 1
}

We expected the in-memory size to be similar to the WiredTiger storageSize (+ oplog), based on the following link: https://www.percona.com/blog/2016/11/18/wiredtiger-b-tree-vs-wiredtiger-in-memory-q-a/

"Q: What is the difference in size of data between WiredTiger on disks versus WiredTiger In-Memory?
A: There is no difference: the size is same. Please note that WiredTiger (on which the Percona Memory Engine is based) itself can additionally allocate up to 50% of the amount specified in the --inMemorySize option."

But we see that the in-memory engine reports "bytes currently in the cache" : 108483792, i.e. roughly 103 MB (much more than the WiredTiger storageSize + 50% + oplog).

Am I missing something?

Regards,
