Yes, I still see it. As an additional test, I tried gradual removal like this:
- insert 1.0 million docs, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- insert 0.5 million docs, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
- insert 0.5 million docs, then sleep 3min.
- remove 0.1 million docs randomly, then sleep 3min.
:
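
The schedule above can be sketched in pure Python. This is only a simulation of the id bookkeeping (no MongoDB calls, and the 3-minute sleeps are omitted); a real run would issue insert_many/delete_many against the collection at each step. The function name `run_schedule` and the use of integer ids are my own choices for illustration, and the sketch stops at the last step listed above since the schedule is truncated there.

```python
import random

def run_schedule(rng):
    """Simulate the insert/remove schedule: track which doc ids are live.

    A real test would insert/delete documents in MongoDB and sleep
    3 minutes between steps; here we only model the id set.
    """
    live = set()
    next_id = 0

    def insert(n):
        # insert n new docs with fresh, monotonically increasing ids
        nonlocal next_id
        live.update(range(next_id, next_id + n))
        next_id += n

    def remove_random(n):
        # remove n docs chosen uniformly at random from the live set
        victims = rng.sample(sorted(live), n)
        live.difference_update(victims)

    insert(1_000_000)
    for _ in range(5):
        remove_random(100_000)
    insert(500_000)
    for _ in range(5):
        remove_random(100_000)
    insert(500_000)
    remove_random(100_000)
    return live

if __name__ == "__main__":
    live = run_schedule(random.Random(42))
    print(len(live))  # 900000 docs remain live at the truncation point
```

Net effect at the point where the schedule is cut off: 1.0M − 0.5M + 0.5M − 0.5M + 0.5M − 0.1M = 900k live docs, so the file has seen far more churn than its final logical size.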
But I got the same result:
"file bytes available for reuse" : 923234304, <-- too large!!
"file size in bytes" : 1483091968
Why won't WiredTiger reuse such a large amount of free space?
Do we need to understand WiredTiger's page allocation and management in depth to explain this?
Regards,
On Friday, April 29, 2016 at 11:58:22 AM UTC+9, michael...@10gen.com wrote: