Mapping between the virtual address returned after allocation and actual persistent memory address


Tonmoy Dey

May 24, 2019, 4:53:27 PM
to pmem
I need to find a way to create a mapping between the virtual address returned after allocation and the actual persistent memory address.

My plan is to access the byte-addressable persistent memory and map it to the returned virtual address, so that later I can rearrange allocations just by changing the actual location the virtual address points to.
Simply put, I need to perform memory compaction later; if I can find such a mapping mechanism, it would be possible to rearrange objects.

Kindly let me know if you are aware of such functionality in PMDK or where I can find such information.

Andy Rudoff

May 27, 2019, 7:39:17 PM
to pmem
Hi,

On Linux, the mmap() system call provides the mapping of a range of persistent memory into your address space, and you are free to map the pages in any order you like (or update them during some sort of defragmentation operation, like you suggest).  Remember that mmap() takes the *offset* into the thing you have opened (for example, a persistent memory file on a DAX file system), so you have all the information you need to rearrange the mappings.

For example, let's say you created a 2G file on a DAX file system.  Then you created a memory mapping of the pages of that file, so that the first page of your mapping is at offset zero in the file, the second page is at offset 4096, and so on.  Later, you decide during defrag that you want the persistent memory at offset zero in the file to appear as the one thousandth page, so you'll call mmap() providing the file offset zero, the size of 4096 (one page) and the new virtual address you want, along with the MAP_FIXED flag.
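A minimal sketch of that remapping step, assuming a pre-created 2G file at the hypothetical path /mnt/daxfs/pool and abbreviated error handling:

/* Remap file offset 0 so it appears as the one thousandth page
 * of an existing 2G mapping, as described above. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE 4096UL
#define POOL (2UL * 1024 * 1024 * 1024)   /* 2G file on a DAX file system */

int main(void)
{
    int fd = open("/mnt/daxfs/pool", O_RDWR);   /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    /* Initial mapping: page i of the mapping is file offset i * PAGE. */
    char *base = mmap(NULL, POOL, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Defrag step: make file offset 0 appear as the one thousandth page.
     * MAP_FIXED atomically replaces whatever was mapped at that address. */
    if (mmap(base + 999 * PAGE, PAGE, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED) {
        perror("mmap MAP_FIXED");
        return 1;
    }

    /* base + 999 * PAGE now aliases file offset 0; file offset
     * 999 * PAGE is no longer mapped anywhere in this mapping. */
    close(fd);
    return 0;
}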

No, PMDK doesn't do any type of remapping like this.  For one thing, MAP_FIXED mappings are tricky/error prone to deal with, especially when you want to maintain large pages (2M or 1G).  But also, this type of defragmentation only works for memory allocations that do not cross over page boundaries.

For libpmemobj, where our allocator hands back opaque "object IDs" instead of addresses, it is possible for us to add a defragger, moving the allocations even if they are not on page boundaries.  We have talked about it, but so far the complexity has outweighed the need.  Who knows what the future will bring.

For volatile allocations, similar to those provided by libvmem, we found that fragmentation could be avoided by using a key-value type interface instead of a malloc/free interface.  See this blog entry, where we describe a solution we built around this idea: http://pmem.io/2019/05/07/libvmemcache.html
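A minimal sketch of that key-value interface, following the libvmemcache README (the /mnt/daxfs path is a placeholder; check libvmemcache.h for the exact signatures in your version):

#include <libvmemcache.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    VMEMcache *cache = vmemcache_new();
    vmemcache_set_size(cache, VMEMCACHE_MIN_POOL);

    /* Back the cache with a DAX file system mount point. */
    if (vmemcache_add(cache, "/mnt/daxfs") != 0) {
        fprintf(stderr, "vmemcache_add failed\n");
        return 1;
    }

    /* Store and fetch by key -- no caller-visible addresses, so the
     * cache is free to place and replace entries without fragmenting. */
    const char *key = "foo";
    vmemcache_put(cache, key, strlen(key), "bar", sizeof("bar"));

    char buf[128];
    ssize_t len = vmemcache_get(cache, key, strlen(key),
                                buf, sizeof(buf), 0, NULL);
    if (len >= 0)
        printf("got %zd bytes\n", len);

    vmemcache_delete(cache);
    return 0;
}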

-andy

Mimo memo

Jul 9, 2021, 4:18:53 AM
to pmem
hi,
just following up on this question here.  Please correct me if I am on the wrong track.
A case we encountered with a customer is that they are using pmem in the following way:
     1. devdax mode (no file system) for random access
     2. non-volatile usage only
     3. multiple threads doing I/O on the same pmem mmap space for concurrency
     4. self-defined allocation and free on pmem

As discussed, a pmem namespace is aligned to a page size of, say, 2MB. If there are 3 threads t1, t2, t3, a good way to reduce fragmentation in this namespace could be:
   t1 mmaps file + offset 0 into its virtual address space,
   t2 mmaps file + offset 2MB,
   t3 mmaps file + offset 4MB,
and then each thread allocates and frees within its own page-size (2MB) domain.  This kind of mmap should be better at avoiding fragmentation of the physical address space than mmapping file + offset 1MB or so, which would cross two pages in the pmem alignment.
Am I right?
Based on my understanding, if the physical addresses in pmem are not contiguous due to unaligned allocation, mapping pmem pages may encounter a problem similar to memory fragmentation in the OS kernel. For example, if the pmem physical address space is fragmented as below, it may be impossible for the kernel to allocate a new 2MB pmem page:
     | 1MB free |--allocated 1MB--| 1MB free |--allocated 1MB--| ........|

Andy Rudoff

Jul 9, 2021, 10:36:04 AM
to pmem
Hi,

I'm not sure I understand the question you are asking.  It isn't clear to me what you mean by "allocation" in this context -- are you saying each allocation is a call to mmap()?  Typically applications mmap the entire devdax capacity and then keep track of what's allocated and what's not allocated using memory allocator data structures similar to how malloc libraries work.

Anyway, if your goal is to prevent 2MB pages from being broken up into smaller pages, then you should only mmap() on 2MB boundaries and sizes that are multiples of 2MB (one thread or many threads doesn't make any difference).  Not sure if that's your question though.
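A minimal sketch of that layout, assuming a hypothetical device DAX node /dev/dax0.0 and a placeholder capacity (query the real size via ndctl or sysfs): the whole device is mapped once at offset 0, and each thread then works within its own 2MB-aligned slice.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MB2         (2UL * 1024 * 1024)
#define DEVDAX_SIZE (16UL * 1024 * 1024 * 1024)  /* placeholder: 16G */

int main(void)
{
    int fd = open("/dev/dax0.0", O_RDWR);   /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    /* One mapping for the whole device: offset 0 and a size that is a
     * multiple of 2MB keep the mapping eligible for 2MB pages. */
    char *base = mmap(NULL, DEVDAX_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Carve out per-thread regions on 2MB boundaries; each thread
     * allocates and frees only within its own slice. */
    char *t1 = base;            /* device offset 0   */
    char *t2 = base + MB2;      /* device offset 2MB */
    char *t3 = base + 2 * MB2;  /* device offset 4MB */
    (void)t1; (void)t2; (void)t3;

    munmap(base, DEVDAX_SIZE);
    close(fd);
    return 0;
}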

-andy

yufan jia

Jul 18, 2022, 10:33:32 AM
to pmem
Hi,
I am a novice just starting to study PMEM. This discussion of mapping in pm is very close to a problem I am currently having, and I was wondering if you have solved it and, if so, what method you used.
I know a long time has passed and I'm sorry to bother you, but this is very important to me and I hope to get your reply.
Regards,
Yufan Jia
