Hi All.
Here's my talk proposal for the June 29th, 2024 meetup.
Thanks,
--Nilay
Title: Improving NVMe multipath IO performance on NUMA-aware systems
Abstract:
NVMe (Non-Volatile Memory Express) is a storage access and transport
protocol for flash and next-generation solid-state drives (SSDs) that
delivers high throughput and fast response times for all types of
enterprise workloads. Today, in both consumer and business applications,
users expect ever-faster response times. To help deliver a high-bandwidth,
low-latency user experience, the NVMe protocol accesses flash storage over a
PCI Express (PCIe) bus and supports tens of thousands of parallel command
queues, which makes it much faster than hard disks and traditional all-flash
architectures that are limited to a single command queue.
The NVMe 1.1 specification added support for multipath I/O (along with
namespace sharing) to further improve performance. The NVMe native multipath
implementation in the Linux kernel supports different IO policies for different
workloads. One of those IO policies is NUMA; on a NUMA-aware system, users
would typically choose NUMA as the default IO policy.
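For context, the active IO policy can be inspected and changed at run time
through the subsystem's iopolicy attribute in sysfs. Below is a minimal sketch,
assuming an example subsystem named nvme-subsys0 and a kernel built with native
multipath support:

    #!/usr/bin/env python3
    # Minimal sketch: inspect and switch the NVMe native multipath IO policy
    # through sysfs. "nvme-subsys0" is only an example name; pick the subsystem
    # that backs your multipath namespace. Writing the attribute needs root.
    from pathlib import Path

    iopolicy = Path("/sys/class/nvme-subsystem/nvme-subsys0/iopolicy")

    print("current IO policy:", iopolicy.read_text().strip())
    iopolicy.write_text("numa")   # other policies include round-robin
    print("new IO policy:", iopolicy.read_text().strip())

The talk assumes the NUMA policy is selected this way (or via the default), so
that path selection prefers the controller closest to the submitting NUMA node.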
In this talk I will first cover the performance issue we faced with NVMe
multipath on a NUMA-aware system. I will then cover how we approached fixing
the issue, which helped us improve NVMe multipath performance by up to ~12%
on a three-NUMA-node system. The performance gain should be even higher as
the NUMA node count increases.
Agenda: This talk will cover the following points:
1. Brief history/background of NVMe
2. The NVMe native multipath design in the Linux kernel
3. Discuss the different IO policies supported by the NVMe native multipath driver
4. Show the performance impact of the NUMA IO policy on PPC and how it was addressed to
improve NVMe multi-controller disk performance.
5. Describe the other open issues in this space and get feedback from the forum
Talk Preference: Regular Talk
References:
https://lore.kernel.org/all/20240416082102...@linux.ibm.com/
https://lore.kernel.org/all/20240516121358....@linux.ibm.com/
https://lore.kernel.org/all/20240517142531....@linux.ibm.com/