While testing with FIO on an x16 Gen3 PCIe card with 32 GB of memory, I see 20 GB/s of sequential-read bandwidth on a cache-enabled system. That is well above what the PCIe spec allows for a Gen3 x16 link.
What checkpoints should we go through for this issue?
Is this a problem of FIO calculating bandwidth incorrectly?
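One checkpoint is to confirm the negotiated link and the theoretical ceiling. A minimal sketch (the device address 3a:00.0 is a placeholder for your card's BDF):

    # Negotiated link speed/width for the card
    lspci -s 3a:00.0 -vv | grep -E 'LnkCap|LnkSta'
    # Gen3 x16 ceiling: 8 GT/s * 16 lanes * 128/130 encoding / 8 bits
    #                 ~= 15.75 GB/s, so 20 GB/s must include cache hits

Another is whether all FIO jobs are reading the same region, so that repeated reads are served from the CPU cache instead of going over the link.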
I am using the dev-dax.fio config file. While debugging, I saw that the same physical address is being mapped for the read threads but different addresses for the write threads.
What might be the issue?
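If multiple read jobs map the same offset of the dax device, they all touch the same physical pages and repeated reads can be served from cache. A sketch of a job file that gives each job its own region (device name and sizes are assumptions):

    [global]
    ioengine=dev-dax
    ; adjust to your devdax device
    filename=/dev/dax0.0
    rw=read
    bs=2m
    numjobs=8
    thread
    group_reporting

    [seq-read]
    size=1g
    ; each job reads its own distinct 1 GiB region
    offset_increment=1g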
I am not sure if we see a similar read-performance issue on the BTT with /dev/pmem0s. When we create a sector namespace and then run a multi-job sequential read, the reported read bandwidth is above the PMem hardware limit, while the pcm_memory.x tool shows the actual PMem bandwidth is small and close to a single job's bandwidth. So the reads might be mapping to the same physical address. We will start to take a look as well.
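Something like the following reproduces it (a sketch; device name, block size, and job count are assumptions). With every job covering the same LBA range, the FIO-reported bandwidth far exceeds what pcm shows at the DIMMs:

    fio --name=btt-seq-read --filename=/dev/pmem0s \
        --ioengine=libaio --direct=1 --rw=read --bs=128k \
        --numjobs=8 --size=4g --group_reporting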
BTT is the Block Translation Table; it is documented at https://docs.kernel.org/driver-api/nvdimm/btt.html?highlight=btt
Then you need to create the namespace with ndctl create-namespace --mode=sector; this exposes /dev/pmem0s on your system. If you then run the multi-job sequential read against it, the reported bandwidth will exceed the PMem hardware limit.
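For example (the region name is an assumption; yours may differ):

    # Create a sector-mode (BTT) namespace
    ndctl create-namespace --mode=sector --region=region0
    # The new block device should show up as pmemXs
    lsblk | grep pmem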
BR,
Dennis W