Feel free to forward my answer to the rdiff-backup mailing list.
I will try to break your message into small pieces and answer them one by one.
Obviously, transferring 25 TiB will take a while even if the servers were driving their 1 Gbps network interfaces at full speed, which does not happen with rdiff-backup: there is always latency from waiting on disk and network I/O.
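As a back-of-the-envelope estimate, assuming the full theoretical 1 Gbps (which you will not reach in practice):

    25 TiB ~= 27.5 * 10^12 bytes ~= 220 * 10^12 bits
    220 * 10^12 bits / 10^9 bits/s ~= 220,000 s ~= 61 hours ~= 2.5 days

So even under ideal conditions the initial transfer takes days, and the I/O latency mentioned above makes it slower than that in practice.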
To get a better understanding of what is happening, you might want to run "strace -p <pid>". strace is a Linux utility that inspects what a program is doing at the kernel level; you should see the file reads and writes as they happen.
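For example, something along these lines (the PID 12345 is just a placeholder for your actual rdiff-backup process):

    # Attach to the running process, follow its children,
    # show only read/write syscalls and the time spent in each.
    strace -f -T -e trace=read,write -p 12345

    # Or gather a per-syscall time summary; stop it with Ctrl-C.
    strace -c -p 12345

If most of the time is spent blocked in reads on the NFS mount, that points at network and filesystem latency rather than rdiff-backup itself.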
> Both servers are Unraid, with spinning disks formatted with XFS.
> What I currently have configured is rdiff-backup running in a docker
> on the target server, with the source server's disk exposed via NFS
> and volume-mapped into the container. I have started spinning up a
> new container to put on the source server with ssh and rdiff-backup.
> Do the veterans here think that rdiff-backup to rdiff-backup over
> ssh will provide better performance than the NFS mounts? I should
> be able to arrange it so that the relative paths stay the same, so
> I don't think I'll need to do another initial backup if I switch to
> ssh.
My experience shows it is better to back up over SSH than through an NFS mount point: it is faster and more stable. Over NFS, every file operation pays a network round trip; over SSH, rdiff-backup runs on both ends, so only metadata and deltas need to cross the wire on incremental runs.
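A minimal sketch of what that looks like, run on the target server (the hostname and paths below are placeholders, adjust them to your shares; rdiff-backup must be installed and on the PATH on both machines):

    # Pull a backup from the source server over SSH into the
    # existing repository on the target.
    rdiff-backup root@source-server::/mnt/user/data /mnt/user/backups/data

And as you suggest, if you arrange for the relative paths to stay the same, rdiff-backup should find the existing repository at the destination and continue the increment chain rather than start another initial backup.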