On Thu, 9 Jun 2022 21:10:48 +0300, Anton Gavriliuk <antos...@gmail.com> wrote:
>My apologies that this is not a pmem-specific topic yet.
>
>Is there anyone in this group interested/working on CXL ?
Me!
>I'm very interested in CXL, have a lot of questions and it would be great
>to meet these people.
Same here.
>Anton
------------
Steve Heller
Hi Daniel,
Any idea how much faster the ASIC might be? Obviously that is WAY too much latency for fast storage
devices like DRAM or even Optane persistent memory.
On Thu, 9 Jun 2022 16:40:29 -0700, Daniel Waddington <waddy...@gmail.com> wrote:
>We (IBM Research) have run FPGA-based versions of a CXL Type 3 memory
>which are coming in around 500-700ns. We would expect this to be
>slower than an ASIC alternative.
>
>Daniel
>
>On Thu, Jun 9, 2022 at 2:10 PM Anton Gavriliuk <antos...@gmail.com> wrote:
>>
>> Thank you Steve
>>
>> I'm mainly interested in CXL.cache & CXL.mem
>>
>> On my 2-socket box, using Intel’s Memory Latency Checker (mlc), I see local DRAM latency of ~82ns and remote DRAM latency of ~145ns:
>>
>> [root@memverge anton]# ./Linux/mlc --latency_matrix
>> Intel(R) Memory Latency Checker - v3.9a
>> Command line parameters: --latency_matrix
>>
>> Using buffer size of 2000.000MiB
>> Measuring idle latencies (in ns)...
>>                 Numa node
>> Numa node            0        1
>>        0          82.0    145.7
>>        1         145.3     81.8
>>
>> [root@memverge anton]#
>>
>> So for CXL 1.1 & PCIe Gen5 it would be great to know: what is the latency of CXL-attached DRAM?
>>
>> Could anybody run mlc on a CXL setup?
>>
>> Anton
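For anyone without CXL hardware yet, here is a minimal, hypothetical pointer-chasing sketch in C of the kind of idle-latency measurement mlc reports (this is not mlc itself). It allocates a buffer on one NUMA node with libnuma and times dependent loads; if a CXL memory expander shows up as its own NUMA node, passing that node id would give a rough load-to-use latency for it. The node id, buffer size and iteration count are illustrative assumptions.

/*
 * Hypothetical sketch (not mlc): pointer-chasing idle-latency probe over a
 * buffer allocated on one NUMA node. Pass the NUMA node id of the memory
 * you want to test (e.g. a CXL-exposed node) on the command line.
 *
 * Build (assumes libnuma is installed):  gcc -O2 chase.c -lnuma
 * Run:                                   ./a.out <numa-node-id>
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NPTRS (64UL * 1024 * 1024)   /* 64M pointers ~= 512 MiB buffer */
#define ITERS (10UL * 1000 * 1000)   /* dependent loads to time        */

int main(int argc, char **argv)
{
    int node = (argc > 1) ? atoi(argv[1]) : 0;

    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available\n");
        return 1;
    }

    /* Allocate the buffer on the requested node only. */
    void **buf = numa_alloc_onnode(NPTRS * sizeof(void *), node);
    size_t *idx = malloc(NPTRS * sizeof(size_t));
    if (!buf || !idx) { perror("alloc"); return 1; }

    /* Build a random cyclic pointer chain so the hardware prefetcher
     * cannot hide the latency of each dependent load. */
    for (size_t i = 0; i < NPTRS; i++) idx[i] = i;
    for (size_t i = NPTRS - 1; i > 0; i--) {       /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < NPTRS; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % NPTRS]];

    /* Chase the chain and report average ns per dependent load. */
    struct timespec t0, t1;
    void **p = &buf[idx[0]];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < ITERS; i++)
        p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("node %d: ~%.1f ns per load (%p)\n", node, ns / ITERS, (void *)p);

    numa_free(buf, NPTRS * sizeof(void *));
    free(idx);
    return 0;
}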
>>
>> On Thu, 9 Jun 2022 at 23:02, steve <st...@steveheller.org> wrote:
>>>
>>> On Thu, 9 Jun 2022 21:10:48 +0300, Anton Gavriliuk <antos...@gmail.com> wrote:
>>>
>>> >My apologies that this is not a pmem-specific topic yet.
>>> >
>>> >Is there anyone in this group interested/working on CXL ?
>>>
>>> Me!
>>>
>>> >I'm very interested in CXL, have a lot of questions and it would be great
>>> >to meet these people.
>>>
>>> Same here.
>>>
>>> >Anton
>>> ------------
>>> Steve Heller
>>
------------
Steve Heller
On Fri, 10 Jun 2022 10:29:24 -0700, Daniel Waddington <waddy...@gmail.com> wrote:
>Tricky question. I would guess around 100ns + latency of attached memory.
In that case it shouldn't be that much worse than NUMA latency (100ns + ~82ns for the attached DRAM comes to ~180ns, versus the ~145ns cross-socket latency in Anton's mlc numbers).
------------
Steve Heller
FYI,
Regarding the capacity question, not CXL attach…
Intel memory population rules do not allow more than one DCPMM module on each memory channel, so max 6 per socket on Cascade Lake and 8 per socket on Ice Lake CPUs. Dual-socket systems thus max out at 8TiB of persistent memory (2 sockets x 8 modules x 512GiB). I have not heard of any Intel plans to release Optane DIMM capacities above 512GiB. Of course there are Optane-based block NVMe SSDs up to 3.2TB.
HPE’s Superdome Flex supports 6 Optane DIMMs per socket, up to 16 sockets, so max 48TiB within one OS footprint using 512GiB DCPMM modules (plus whatever DRAM is configured).
Also, Smart Modular recently released their Kestral PCIe 4.0 card with an embedded FPGA that supports up to 2TB of Optane in memory mode in a single PCIe slot. I don’t know how many of these can be used in a server at once, or how their capacity might be additive to DIMM-based modules:
https://www.smartm.com/product/advanced-memory/AIC
--Lance
Per Steve’s comment (forked thread), I agree the market is small for huge persistent memories at this point – due primarily to cost and ecosystem enablement (kinda like the early days of flash). In short, I see it handy for metadata, sub-4KiB IO, and backing IMDBs or faster large DBs/KVs than flash can achieve.
For Anton, I generally agree on your points – just calling out options for folks on the alias who may not be aware. I’m an HPE employee, but no, I wasn’t selling Superdome Flex, nor was I saying HPE offers or supports Smart Modular’s Kestral card. If there were a volume market for Kestral, I’d be interested in knowing what that is.
--Lance
Sounds good to me as far as it goes, but why not use App Direct mode?
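For context on the App Direct suggestion: in App Direct the modules are exposed as DAX devices/filesystems rather than as transparent volatile memory, so the application maps them and handles persistence itself. Below is a minimal sketch using PMDK's libpmem; the /mnt/pmem0 mount point and file name are assumptions for illustration.

/*
 * Minimal App Direct sketch using PMDK's libpmem: map a file on a
 * DAX-mounted pmem filesystem and make a store durable directly from
 * user space (no page cache, no block I/O).
 *
 * Build: gcc -O2 appdirect.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create and map a 16 MiB file on an fsdax mount (hypothetical path). */
    char *addr = pmem_map_file("/mnt/pmem0/example", 16 << 20,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary store ... */
    strcpy(addr, "hello, App Direct");

    /* ... made durable with a user-space cache-line flush + fence when the
     * mapping is real pmem, or msync() otherwise. */
    if (is_pmem)
        pmem_persist(addr, strlen(addr) + 1);
    else
        pmem_msync(addr, strlen(addr) + 1);

    pmem_unmap(addr, mapped_len);
    return 0;
}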
On Fri, 17 Jun 2022 09:53:42 +0300, Anton Gavriliuk <antos...@gmail.com> wrote:
>Hi all
>
>I highly recommend the article -
>https://www.nextplatform.com/2022/06/16/meta-platforms-hacks-cxl-memory-tier-into-linux/
>
>I read it 3 times before I understood everything.
>
>In particular, the article answers the question about the latency of
>CXL-attached DDR5 DRAM: about ~250ns.
>Servers with Sapphire Rapids and CXL1.1 should be available next year.
>
>The current Optane PMEM generation offers ~300ns latency.
>
>The next Optane PMEM generation (300 series) might have lower latency
>compared to the current 200 series.
>
>Therefore, for the next 1-3 years, if you don't need more than 8 TB of
>volatile memory per 2S box, then in terms of price/performance I would
>prefer PMEM in Memory mode over CXL-attached DRAM.
>
>But what do you think?
>
>Anton
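One practical consequence of the approach described in that article: Linux presents CXL-attached DRAM as a CPU-less NUMA node, so besides kernel-driven tiering an application can place a large, latency-tolerant region there explicitly. Below is a minimal sketch using mbind(); treating node 2 as the CXL node is an assumption, so check the real topology with "numactl --hardware" first.

/*
 * Hypothetical sketch: place one large, latency-tolerant allocation on a
 * CXL-exposed NUMA node while everything else stays in local DRAM.
 *
 * Build (assumes libnuma headers): gcc -O2 cxl_bind.c -lnuma
 */
#include <numaif.h>      /* mbind(), MPOL_BIND */
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2                    /* assumed id of the CXL memory node */
#define REGION   (1UL << 30)          /* 1 GiB region for the cold data    */

int main(void)
{
    void *cold = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (cold == MAP_FAILED) { perror("mmap"); return 1; }

    /* Bind the region to the CXL node before first touch, so its pages
     * are actually allocated there. */
    unsigned long nodemask = 1UL << CXL_NODE;
    if (mbind(cold, REGION, MPOL_BIND, &nodemask,
              sizeof(nodemask) * 8, 0) != 0) {
        perror("mbind");
        return 1;
    }

    /* First touch faults the pages in on the bound node. */
    memset(cold, 0, REGION);
    printf("1 GiB region bound to NUMA node %d\n", CXL_NODE);

    munmap(cold, REGION);
    return 0;
}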
>
>On Mon, 13 Jun 2022 at 22:19, Evans, Lance <lance...@hpe.com> wrote:
>
>> Per Steve’s comment (forked thread), I agree the market is small for huge
>> persistent memories at this point – due primarily to cost and ecosystem
>> enablement (kinda like the early days of flash). In short, I see it handy
>> for metadata, sub-4KiB IO, and backing IMDBs or faster large DBs/KVs than
>> flash can achieve.
>>
>>
>>
>> For Anton, I generally agree on your points – just calling out options for
>> folks on the alias who may not be aware. I’m an HPE employee but no I
>> wasn’t selling Superdome Flex, nor was I saying HPE offers or supports
>> Smart Modular’s Kestral card. If there were a volume market for Kestral,
>> I’d be interested in knowing what that is.
>>
>>
>>
>> --Lance
>>
>>
>>
>> *From: *Anton Gavriliuk <antos...@gmail.com>
>> *Date: *Monday, June 13, 2022 at 12:35 PM
>> *To: *"Evans, Lance" <lance...@hpe.com>
>> *Cc: *Amnon Izhar <aiz...@gmail.com>, Daniel Waddington <
>> waddy...@gmail.com>, Steve <st...@steveheller.org>, pmem <
>> pm...@googlegroups.com>
>> *Subject: *Re: CXL
>>
>>
>>
>> > Intel memory population rules do not allow more than one DCPMM module
>> on each memory channel. So max 6 per socket on Cascade Lake and 8 per
>> socket on Ice Lake CPUs. Dual-socket systems thus max at 8TiB of
>> persistent memory. I have not heard any Intel plans to release Optane DIMM
>> capacities above 512GiB.
>>
>>
>>
>> Correct. The idea of local PMEM plus CXL-attached DRAM requires being able
>> to install PMEM in all local memory slots. Currently only 50% of the local
>> memory slots are available for pmem. That could change with Optane 300 and
>> next-gen servers....
>>
>> Do we have to ask Intel directly?
>>
>>
>>
>> > Of course there are Optane-based block NVMe SSDs up to 3.2TB
>>
>>
>>
>> This is a fast disk, but too slow to be memory.... The fastest I/O is NO
>> I/O: load/store/flush/persist/... with pmem instead.
>>
>>
>>
>> > HPE’s Superdome Flex supports 6 Optane DIMMs per socket, up to 16
>> sockets, so max 48TiB within one OS footprint using 512GiB DCPMM modules
>> (plus whatever DRAM is configured)
>>
>>
>>
>> That is nice, but the idea was to get more Optane pmem without a tremendous
>> number of CPU cores... and without expensive DB/app licenses for that many
>> CPU cores.
>>
>>
>>
>> > Also Smart Modular recently released their Kestrel PCIe 4.0 card with
>> embedded FPGA that supports up to 2TB of Optane in memory mode on one PCIe
>> slot. I don’t know how many of these can be used in a server at once, or
>> how their capacity might be additive to DIMM-based modules:
>>
>>
>>
>> Does HPE support this?
>>
>> Since it uses PCIe 4.0 in the data path, latency should be much higher
>> than the average pmem latency of ~350ns.
>>
>> Do you have any numbers?
>>
>>
>>
>> Anton
>>
>>
>>
>> On Mon, 13 Jun 2022 at 20:19, Evans, Lance <lance...@hpe.com> wrote:
>>
>> FYI,
>>
>>
>>
>> Regarding the capacity question, not CXL attach…
>>
>>
>>
>> Intel memory population rules do not allow more than one DCPMM module on
>> each memory channel. So max 6 per socket on Cascade Lake and 8 per
>> socket on Ice Lake CPUs. Dual-socket systems thus max at 8TiB of
>> persistent memory. I have not heard any Intel plans to release Optane DIMM
>> capacities above 512GiB. Of course there are Optane-based block NVMe SSDs
>> up to 3.2TB:
>>
>>
>>
>>
>> https://www.intel.com/content/www/us/en/products/details/memory-storage/data-center-ssds/optane-dc-ssd-series.html
>>
>>
>>
>> HPE’s Superdome Flex supports 6 Optane DIMMs per socket, up to 16 sockets,
>> so max 48TiB within one OS footprint using 512GiB DCPMM modules (plus
>> whatever DRAM is configured):
>>
>>
>>
>>
>> https://www.hpe.com/us/en/servers/superdome.html?jumpid=ps_2tuqx4jvma_aid-520061464&ef_id=EAIaIQobChMImqeyxO2q-AIVIQ_nCh3NMQSNEAAYASAAEgL1-_D_BwE:G:s&s_kwcid=AL!13472!3!523569994755!e!!g!!hpe%20superdome%20flex!13236197024!123093435296&
>>
>>
>>
>> Also Smart Modular recently released their Kestrel PCIe 4.0 card with
>> embedded FPGA that supports up to 2TB of Optane in memory mode on one PCIe
>> slot. I don’t know how many of these can be used in a server at once, or
>> how their capacity might be additive to DIMM-based modules:
>>
>>
>>
>> https://www.smartm.com/product/advanced-memory/AIC
>>
>>
>>
>> --Lance
>>
>>
>>
>> *From: *<pm...@googlegroups.com> on behalf of Anton Gavriliuk <
>> antos...@gmail.com>
>> *Date: *Saturday, June 11, 2022 at 10:54 AM
>> *To: *Amnon Izhar <aiz...@gmail.com>
>> *Cc: *Daniel Waddington <waddy...@gmail.com>, Steve <
>> st...@steveheller.org>, pmem <pm...@googlegroups.com>
>> *Subject: *Re: CXL
>>
>>
>>
>> > It should be the other way around. DRAM connected to CPU on the DDR
>> bus, PMEM connected over CXL.
>>
>>
>>
>> Yes, generally it should be the other way around. But that "other way
>> around" requires a CPU with CXL 2.0 support, so we could expect that in
>> 2025.......
>>
>>
>>
>> Anton
>>
>>
>>
>> On Sat, 11 Jun 2022 at 19:47, Amnon Izhar <aiz...@gmail.com> wrote:
>>
>> It should be the other way around. DRAM connected to CPU on the DDR bus,
>> PMEM connected over CXL.
>>
>>
>>
>>
>>
>> On Sat, Jun 11, 2022 at 7:13 PM Anton Gavriliuk <antos...@gmail.com>
>> wrote:
>>
>> Hi all
>>
>> One of the most frequent questions about PMEM I have heard from our
>> customers is:
>>
>> when will PMEM become much bigger?
>>
>> The direct way is to increase DCPMM sizes. But even without that, we could
>> try to increase PMEM capacity per server using CXL 1.1 & PCIe 5.0.
>>
>> Please let me explain the idea:
>>
>> Presently, for a 2-socket box we can get up to 8 TB of PMEM using 50% of the
>> memory slots. But if we could use 100% of the memory slots on the system
>> motherboard, we would get 16 TB of PMEM with current DCPMM sizes. In that
>> case DRAM could be attached using CXL 1.1 & PCIe 5.0.
>>
>> What do you think? Does it make sense?
>>
>> Anton
>>
>>
>>
>> On Fri, 10 Jun 2022 at 20:36, steve <st...@steveheller.org> wrote:
>> >> ------------
>> >> Steve Heller
>> ------------
>> Steve Heller
>>
>>
>>
>>
>> --
>>
>> Sent from Gmail Mobile
>>
>>
>>
------------
Steve Heller