Thanks Austin,
The application workload is something of a fragmentation torture test:
it involves a mixture of many long-lived small and large (>100 MB)
objects, along with regularly allocated short-lived small and large
objects. I have tried to create a synthetic reproducer but have not
succeeded so far.
Regarding max_map_count, your explanation is very clear; I had
apparently missed the large comment in the runtime explaining all of
that.
Do you expect a significant drawback to choosing 2 MB rather than
16 MB as the granularity of the huge page flag manipulation in the
case of very large heaps?
Regarding the virtual memory footprint, it changed radically with Go
1.12. It basically looks like a leak: I have seen it grow to more
than 1 TB, while the actual total heap size never exceeds 180 GB.
Although I understand that it is easy to construct a situation where
there is repeatedly no free contiguous interval of >100 MB in the
address space, this is a significant difference from Go 1.11, where
the address space would grow to 400-500 GB for a similar workload and
then stay flat. I could not find an obvious change in the allocator
that explains the phenomenon (and unfortunately my resources do not
allow an easy side-by-side comparison of both program lifetimes).
Am I right in saying that the scavenging method or frequency does not
(and cannot) affect the virtual memory footprint and its dynamics?
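To make the question concrete, here is the minimal check I use on the
Go side; if I read the runtime documentation correctly, scavenging
only grows HeapReleased and never shrinks Sys or HeapSys, which track
reserved virtual address space (scavengeStats is just my helper name):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// scavengeStats allocates some garbage, forces a GC plus scavenge, and
// returns the runtime's view of reserved vs released heap memory.
func scavengeStats() (sys, heapSys, heapReleased uint64) {
	buf := make([][]byte, 64)
	for i := range buf {
		buf[i] = make([]byte, 1<<20) // ~64 MB of short-lived heap
	}
	buf = nil
	debug.FreeOSMemory() // GC + return unused physical pages to the OS

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	return ms.Sys, ms.HeapSys, ms.HeapReleased
}

func main() {
	sys, heapSys, released := scavengeStats()
	// Sys and HeapSys count reserved virtual address space, so they do
	// not drop after scavenging; only HeapReleased moves.
	fmt.Printf("Sys=%d HeapSys=%d HeapReleased=%d\n",
		sys, heapSys, released)
}
```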
Regards,
Rémy.