I have a Mellanox 40 Gbit network card running in Ethernet mode that's been installed in my Unraid server for some time. Before I installed Unraid, I was running Ubuntu Server and was able to get most of the rated speed out of the card. After switching to Unraid, the speed dropped to around 4 Gbit/s. I tried fixing the problem but eventually gave up due to other things I needed to take care of. After making some upgrades to my server, I've decided to try to resolve this issue again so I can make full use of the high-speed drives in my server.
Unraid version is currently 6.11.5, but this problem has existed for the last few versions. Basically, I have no idea where to start diagnosing the issue. I do recall I was able to get 10 Gbit/s out of the card during my past attempts to fix this, but I really can't recall what I may have done to get that to happen.
One of the steps I've recently taken was to disable flow control on my desktop, 40 Gbit switch, and server. I've since re-enabled it on everything, and I can't see any difference with it on or off. I have found that file transfers run faster with jumbo frames enabled, though.
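For reference, this is roughly how I've been checking and toggling things from the Unraid console (eth0 is a placeholder; the Mellanox port may have a different name on your box):

```shell
# Show what's actually in effect right now
ethtool -a eth0        # pause / flow-control settings
ip link show eth0      # current MTU; jumbo frames need 9000 on NIC, switch, and desktop

# Toggle for a test (these revert on reboot unless made persistent
# in Unraid's network settings)
ethtool -A eth0 rx on tx on
ip link set eth0 mtu 9000
```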
Jumbo frames usually do not make anything "better" today. They were introduced in the 1GbE era, and made sense then because they reduced CPU load: fewer packets meant fewer headers to process.
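The effect can be sketched with back-of-the-envelope math (assuming plain IPv4 + TCP with no options, and standard Ethernet framing overhead):

```python
# Rough math for why jumbo frames mainly help CPU load, not raw throughput.
L2_OVERHEAD = 38     # preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
L3L4_HEADERS = 40    # IPv4 (20) + TCP (20), no options

def goodput_efficiency(mtu: int) -> float:
    """Fraction of line rate left for TCP payload at a given MTU."""
    return (mtu - L3L4_HEADERS) / (mtu + L2_OVERHEAD)

def packets_per_second(line_rate_bps: float, mtu: int) -> float:
    """Frames per second needed to saturate the link at a given MTU."""
    return line_rate_bps / 8 / (mtu + L2_OVERHEAD)

for mtu in (1500, 9000):
    eff = goodput_efficiency(mtu)
    pps = packets_per_second(40e9, mtu)
    print(f"MTU {mtu}: {eff:.1%} efficiency, {pps / 1e6:.2f} Mpps at 40 Gbit/s")
# → MTU 1500: 94.9% efficiency, 3.25 Mpps at 40 Gbit/s
# → MTU 9000: 99.1% efficiency, 0.55 Mpps at 40 Gbit/s
```

So at 40 Gbit/s jumbo frames buy only about 4% of raw throughput, but they cut the packet rate roughly six-fold, which is where the CPU savings come from.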
NIC offload is usually perfectly OK. The Tips and Tweaks plugin only allows it to be turned off because some very cheap and badly behaved cards/drivers messed it up, but Intel and Mellanox cards are fine. And at 40G you need offloads desperately; otherwise your CPU will be glowing in the dark.
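If you want to verify rather than guess, ethtool will show what the driver actually has enabled (eth0 is a placeholder for whichever port Unraid assigned to the Mellanox card):

```shell
# List the offload features relevant at high line rates
ethtool -k eth0 | grep -E 'tcp-segmentation|generic-receive|large-receive'

# At 40G you want segmentation and receive offloads left on
ethtool -K eth0 tso on gro on
```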
In case you have made a fast NVMe your "cache" drive (nowadays renamed to "primary storage"), you should be able to see write speeds for a certain amount of data (as long as the cache on the disk can keep up) in the range of 3000 MB/s or more (maybe almost 4000 MB/s).
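It's also worth taking the disks out of the equation entirely: an iperf3 run between desktop and server measures raw TCP throughput and nothing else (the server IP below is a placeholder; on Unraid, iperf3 can be installed via NerdTools or run from a Docker container):

```shell
# On the Unraid server
iperf3 -s

# On the desktop, pointed at the server's IP
iperf3 -c 10.0.0.2 -t 30 -P 4   # 30-second test, 4 parallel streams
```

The `-P 4` matters at 40G: a single TCP stream often cannot saturate a link that fast, so a multi-stream result much higher than a single-stream one points at per-connection limits rather than the card.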
That's a very good point, but it seems that the network interface is the bottleneck. To test, I have a 2 TB NVMe that's its own share and tops out at about 370 MB/s, a RAM disk I've added to the server that's also limited to about 370 MB/s, and the LibreSpeed docker can't get higher than about 4000 Mbps down and 1867 Mbps up.
If it helps: the server is a 22-core Xeon with 64 GB of RAM. I have several 12 TB SAS disks in the primary array, and my cache pool is made up of four DRAM-equipped SSDs in RAID 1. I'm able to achieve excellent speeds internally; recently I got over 1.8 GB/s out of the array while testing. VMs seem to be capped at 10 Gbit/s, which I believe is a limitation of the network driver.