File handles? How many open connections do you have at peak memory?
Beg your pardon. There were 123,000 connections established in total, but there are only 5 or 6 open at the time the node runs out of RAM.
I can't see any evidence of file handles growing out of control. At startup the TSD holds 323. By the time 2 GB of RAM has gone 'missing', the number fluctuates between 330 and 342. This is consistent with tsd.connectionmgr.connections, which tells me there are between 4 and 20 active connections.
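As a cross-check on those numbers, the JVM can report its own descriptor count from inside the process on Unix. This is a generic sketch, not OpenTSDB code; it relies on the HotSpot-specific com.sun.management extension, so the cast can fail on other JVMs:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCount {
    /** Returns this process's open file-descriptor count, or -1 if unavailable. */
    public static long openFds() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            return ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
        }
        return -1; // not a Unix HotSpot JVM
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFds());
    }
}
```

If this tracks the 330-342 figure over time, the descriptor table really is stable and file handles can be ruled out.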
Yeah, the NIO MXBean should show the memory allocated to direct buffers, but I thought that since most (all?) of the Netty channel buffers in the TSD are heap-based, there wouldn't be many direct byte buffers.
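Rather than guessing, the direct-buffer theory can be tested in-process: the same data the NIO MXBean exposes over JMX is available via BufferPoolMXBean. A minimal sketch (the pool names "direct" and "mapped" are the standard JDK ones):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class BufferPools {
    /** Sums memory used by the named NIO buffer pool ("direct" or "mapped"). */
    public static long memoryUsed(String poolName) {
        List<BufferPoolMXBean> pools =
            ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        long total = 0;
        for (BufferPoolMXBean pool : pools) {
            if (pool.getName().equals(poolName)) {
                total += pool.getMemoryUsed();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("direct: " + memoryUsed("direct") + " bytes");
        System.out.println("mapped: " + memoryUsed("mapped") + " bytes");
    }
}
```

If the "direct" figure stays small while resident memory climbs, direct byte buffers are not the leak.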
Could a large class allocation be a consequence of a large number of UIDs?
Hi, thanks for the tip. It's not the heap size that's growing; that's under control. What I see is that the off-heap allocation grows over time. By far the largest component, according to the JVM's own Native Memory Tracking, is the "Class" category: at ~1 GB, the Class allocation is roughly half the max heap size (~2 GB). I'm wondering if this is linked to the size of the UID cache, as I have getting on for 500k UIDs.
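One way to watch the suspect category from inside the TSD: NMT's "Class" allocation largely corresponds to the Metaspace and Compressed Class Space memory pools, which are readable via MemoryPoolMXBean. A sketch, assuming HotSpot's standard pool names (the second pool only exists when compressed class pointers are enabled):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ClassSpace {
    /** Returns bytes used by the named memory pool, or -1 if no such pool exists. */
    public static long used(String name) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().equals(name)) {
                return pool.getUsage().getUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("Metaspace: " + used("Metaspace") + " bytes");
        System.out.println("Compressed Class Space: "
            + used("Compressed Class Space") + " bytes");
    }
}
```

Sampling these alongside the UID cache size would show whether the two actually grow together or the correlation is coincidental.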
Also, I am on MapR-DB, not HBase. There are no "No Such Region" events recorded by the TSDs, which probably reflects that MapR-DB is lightning fast at region failover, splitting and compaction-type activity.
I might have another use case for a spooling plugin, though... do you know if such a plugin already exists?