Thanks Steve! I am going to do a similar thing: a separate file system for each NUMA socket plus NUMA-aware threads.

One more question along the same lines: is there a way to stitch together a set of PM pages that are both virtually and physically contiguous? I have seen tricks played with DRAM where one queries the kernel (sysfs/procfs) with the mmapped addresses to find the starting physical addresses of the mapped pages and work out which ones are physically contiguous. I could not figure out a similar trick with the PM device mapping.

thanks,
--Pradeep
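For reference, a minimal sketch of the kind of pagemap trick mentioned above, assuming a page-aligned mmap of a file on an fsdax mount: it reads /proc/self/pagemap to translate each virtual page to a physical frame number and checks whether neighbouring pages are physically adjacent. On recent kernels the PFN field is zeroed unless the process has CAP_SYS_ADMIN, and whether this carries over cleanly to a device DAX mapping is exactly the open question here, so treat it as illustrative only.

/* Sketch: check physical contiguity of an mmapped region via /proc/self/pagemap.
 * Assumes "addr" points to a page-aligned mapping of "len" bytes (e.g. a file
 * on an fsdax mount). Needs CAP_SYS_ADMIN on recent kernels, otherwise the
 * PFN field reads back as zero. Illustrative only, not production code. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

static uint64_t vaddr_to_pfn(int pagemap_fd, void *vaddr, long page_size)
{
    uint64_t entry;
    off_t off = ((uintptr_t)vaddr / page_size) * sizeof(entry);
    if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
        return 0;
    if (!(entry & (1ULL << 63)))        /* page not present */
        return 0;
    return entry & ((1ULL << 55) - 1);  /* bits 0-54 hold the PFN */
}

int check_contiguous(void *addr, size_t len)
{
    long page_size = sysconf(_SC_PAGESIZE);
    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0)
        return -1;

    uint64_t prev = vaddr_to_pfn(fd, addr, page_size);
    int contiguous = (prev != 0);
    for (size_t off = page_size; off < len && contiguous; off += page_size) {
        uint64_t pfn = vaddr_to_pfn(fd, (char *)addr + off, page_size);
        contiguous = (pfn != 0 && pfn == prev + 1);
        prev = pfn;
    }
    close(fd);
    return contiguous;
}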
On Tue, Mar 26, 2019 at 10:37 AM Steve Scargall <steve.s...@gmail.com> wrote:
On Tuesday, 26 March 2019 07:58:27 UTC-6, Pradeep Fernando wrote:
> Hi Anton,
>
> >> Do you need to create a single DAX enabled file system across the sockets? Or can you get away with creating multiple file systems, one for each NUMA region?
>
> This is a very good suggestion. I am building an application that uses the PMDK allocator and want a way to allocate memory from different NUMA devices.

The most common solution is to create pools of worker threads where the pool threads are NUMA-bound to the CPU sockets closest to the pmem and DRAM that they will be accessing. The app then knows which worker thread pool to use to access different data.

We do have an open enhancement/feature request to support cross-pool transactions: https://github.com/pmem/issues/issues/988
--
Pradeep Fernando
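As a rough illustration of the arrangement discussed above (a separate fsdax file system per socket, one libpmemobj pool per file system, and worker threads NUMA-bound to the matching socket), here is a minimal sketch. The mount paths, layout name, and pool size are hypothetical example values, not anything prescribed by PMDK.

/* Sketch: one pmemobj pool per NUMA socket, with worker threads bound to the
 * matching node. Paths, layout name and sizes are hypothetical examples.
 * Link with: -lpmemobj -lnuma -lpthread */
#include <libpmemobj.h>
#include <numa.h>
#include <pthread.h>
#include <stdio.h>

#define NSOCKETS  2
#define POOL_SIZE ((size_t)1 << 30)   /* 1 GiB per pool, example value */

static const char *pool_paths[NSOCKETS] = {
    "/mnt/pmem0/app.pool",            /* fsdax mount backed by socket-0 pmem */
    "/mnt/pmem1/app.pool",            /* fsdax mount backed by socket-1 pmem */
};

struct worker_arg {
    int node;
    PMEMobjpool *pop;
};

static void *worker(void *argp)
{
    struct worker_arg *arg = argp;

    /* Run this thread only on CPUs of the NUMA node that owns its pool. */
    if (numa_run_on_node(arg->node) != 0)
        perror("numa_run_on_node");

    /* ... allocate/transact against arg->pop with the pmemobj_tx_* API ... */
    return NULL;
}

int main(void)
{
    PMEMobjpool *pops[NSOCKETS];
    pthread_t threads[NSOCKETS];
    struct worker_arg args[NSOCKETS];

    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    for (int i = 0; i < NSOCKETS; i++) {
        pops[i] = pmemobj_create(pool_paths[i], "example_layout",
                                 POOL_SIZE, 0666);
        if (pops[i] == NULL) {
            fprintf(stderr, "pmemobj_create %s: %s\n",
                    pool_paths[i], pmemobj_errormsg());
            return 1;
        }
        args[i].node = i;
        args[i].pop = pops[i];
        pthread_create(&threads[i], NULL, worker, &args[i]);
    }

    for (int i = 0; i < NSOCKETS; i++) {
        pthread_join(threads[i], NULL);
        pmemobj_close(pops[i]);
    }
    return 0;
}

The key design point is the one Steve makes: the application, not the library, decides which pool (and therefore which socket's pmem) a piece of data lives in, and routes work to the thread pool bound to that socket.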
Hi Pradeep,

Can you say a little more about why you want the physical pages to be contiguous? If it is to use large pages, fs DAX already makes an effort to do contiguous allocations and use large page mappings when possible. And the "device DAX" mode on Linux will ensure you get large pages, but at the cost of not being able to manage pmem using file operations (so no names, permissions, etc.). For most use cases I've seen, the opportunistic use of large pages already in the kernel is sufficient with fs DAX and not something the app has to worry about. But of course each use case is different, so perhaps you have another reason for wanting the pages to be physically contiguous...
-andy
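To make the device DAX option concrete, here is a minimal sketch of mapping a devdax character device directly. The device name and the 2 MiB alignment are example values that would need to be checked against the actual namespace configuration (e.g. with ndctl).

/* Sketch: map a device DAX character device directly. With devdax the kernel
 * backs the range with large (2 MiB here) page mappings, at the cost of losing
 * normal file semantics. The device name and alignment are example values;
 * check the real alignment with a tool such as ndctl before relying on it. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define DEVDAX_PATH "/dev/dax0.0"            /* hypothetical device */
#define MAP_LEN     (4 * (size_t)(2 << 20))  /* multiple of the 2 MiB alignment */

int main(void)
{
    int fd = open(DEVDAX_PATH, O_RDWR);
    if (fd < 0) {
        perror("open " DEVDAX_PATH);
        return 1;
    }

    /* Length (and offset) must be multiples of the device alignment. */
    void *addr = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* ... use the mapping; make stores durable with libpmem (pmem_persist)
     *     or cache flush + fence, since there is no page cache involved ... */

    munmap(addr, MAP_LEN);
    close(fd);
    return 0;
}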