To add on to what Mike said, these are passive (rather than active) copper 400GbE <-> 4x100GbE breakout cables. As such, the node ends run 50Gb/s PAM4 lanes just like the switch end (which is 8x50Gb/s PAM4), so the cable doesn't need the gearboxes used for 25Gb/s <-> 50Gb/s lane conversion and PAM4 <-> NRZ remodulation found in some other 400GbE <-> 4x100GbE breakout cables. The switch interfaces report that they've negotiated RS544 FEC, and there seem to be some decoder latency values published here:
https://www.signalintegrityjournal.com/articles/3405-200-gbps-ethernet-forward-error-correction-fec-analysis . From a quick search I've seen a number of discussions of RS544 FEC latency, but the values discussed (30-80ns) are nowhere near the 50us you're talking about. Info on the cables is rather scant online. The best I've been able to find is a datasheet from
fs.com, but I don't think that provides the info you're looking for:
https://resource.fs.com/mall/doc/20240428122423cdk25i.pdf . As far as the switch goes, it claims "sub-850ns" latency, and I don't think that this switch is passing enough traffic to where queueing issues would start to show up. Might it be an issue with the NIC? We haven't had a whole lot of experience with the Intel E810 NICs before.