Paper: Memory compaction without relocation


Richard Townsend

Feb 18, 2019, 2:51:32 AM
to platform-architecture-dev
Hi platform-architecture! 

I was reading HN over the weekend and it featured this paper from Powers et al. about their MESH allocator:


To summarise: they achieved a 16% RSS reduction for Firefox running Speedometer 2.0 by cleverly remapping anonymous memory to file-backed pages and merging pages whose live objects don't overlap (this ensures that memory can be compacted without changing the virtual addresses), whilst suffering only a small performance reduction (1% or so). Whilst the implementation itself is probably unsuitable for Chromium (since Chromium doesn't use malloc/free very much), the technique could be promising for stuff like PartitionAlloc. What do you folks think? Is anybody looking at this?

Best
Richard


Kentaro Hara

Feb 19, 2019, 12:14:43 AM
to Richard Townsend, memory-dev, platform-architecture-dev
This is super interesting!

If we implement MESH on PageAllocator, Oilpan, PartitionAlloc and V8 can get the benefit :)

However, what are the security implications of MESH? We don't want to put ArrayBuffers and Nodes on the same (physical) page, which is why PartitionAlloc segregates heaps into partitions. Maybe MESH could be applied only to pages inside the same partition?




--
Kentaro Hara, Tokyo, Japan

Yuta Kitamura

Feb 22, 2019, 1:11:06 AM
to Kentaro Hara, Richard Townsend, memory-dev, platform-architecture-dev
I quickly read the paper, and this is indeed interesting.

The algorithm appears to work only on pages holding objects of the same size class, so haraken's concern essentially does not apply, or can be easily circumvented.

I have a concern, however: won't the number of memory mapping areas (VMAs) explode? Since the algorithm seems to remap memory at page granularity, we might hit limits like vm.max_map_count (the maximum number of VMAs in a process, 65536 by default).

Yuta Kitamura

Feb 22, 2019, 1:28:00 AM
to Kentaro Hara, Richard Townsend, memory-dev, platform-architecture-dev
Thought about this more.

This algorithm essentially defragments the user memory at the expense of fragmenting the VMAs in the kernel.

If max_map_count is 65536 (the default) and the mappings are maximally fragmented, you can *only* address 65536 * 4096 bytes = 256MB of memory. That is the worst-case number, so if we keep our 2GB limits we can probably use it. For an app that uses much more memory, the applicability of the algorithm is more questionable.