Hello Steve,
does your "beegfs system" consist of only 1 server now?
That's completely useless in practice; you'd be better off with an NFS server over IPoIB, which gives you >11 GB/s to the clients on a 100 Gb IB card with an IB switch.
A 1-server
"beegfs system" is only good for taking a look at installation and configuration; a
"beegfs system" for real use starts at a minimum of 2 servers.
When you configure your system properly (e.g. vm.vfs_cache_pressure=30), there won't be any disk read access for metadata at all, because it is fully answered from the (e.g. xfs) filesystem cache in RAM
(which, by the way, doesn't work with zfs, because zfs looks to the kernel like a database application and uses its own ARC cache, which is 2-3x slower for meta read requests).
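For example, on a meta server you could set it like this (a minimal sketch; the file name under sysctl.d is just my choice, assuming the box's RAM is meant for the dentry/inode cache):

  # keep dentries/inodes in RAM instead of reclaiming them early
  echo "vm.vfs_cache_pressure = 30" > /etc/sysctl.d/90-beegfs-meta.conf
  sysctl -p /etc/sysctl.d/90-beegfs-meta.conf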
As for the meta write case: which application would create millions of such requests? One that creates that number of files per second would be terribly written anyway. So meta reads are the priority, and those are solved by RAM access.
If you slow down on meta operations, you have probably run out of IB network capacity, and you should add a further beegfs node with 1 meta and 1 storage service instead of going to a dedicated meta server.
Also, 1 meta service (MDS) can have exactly 1 meta target (MDT), which destroys the dream of scaling targets without also scaling meta services.
As with any distributed system, its MTBF goes down drastically as the number of involved hw parts and services goes up (think 1000x worse than a single nfs fileserver).
So as a rule of thumb, design a distributed system as scalable and as simple as at all possible, and be prepared for unexpected downtimes.
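Quick back-of-envelope with made-up numbers: failure rates of parts in series add up, so system MTBF is roughly part MTBF divided by part count:

  # 100 parts at 100,000 h each -> one failure every ~1,000 h (~6 weeks)
  echo "100000 / 100" | bc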
You could begin with 2 servers: on the first 1 mgmt, 1 meta and 1 storage service, on the second 1 meta and 1 storage service.
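Roughly like this with the standard setup scripts (just a sketch; the hostnames node01/node02, paths and IDs are example values, check the flags against your beegfs version):

  # node01: mgmt + meta + storage
  /opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs/mgmtd
  /opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 1 -m node01
  /opt/beegfs/sbin/beegfs-setup-storage -p /data/beegfs/storage -s 1 -i 101 -m node01
  # node02: meta + storage
  /opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 2 -m node01
  /opt/beegfs/sbin/beegfs-setup-storage -p /data/beegfs/storage -s 2 -i 201 -m node01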
When you scale up, think about a separate mgmt server with exactly the same hardware as the meta/storage servers, so you can quickly swap a failing node for the mgmt one
(or, if the mgmt node fails, switch that service over to the first meta/storage node). If you have lots of TB to PB inside a beegfs, you won't want a long downtime on any failure.
That again can be improved with external raid storage systems connected to a minimum of 2 servers (volumes can be moved between them online).
Be careful if you look into a zfs backend: its meta storage usage is really bad, meta ARC access is slow, and for data streaming you need double the disks for the same bandwidth as hw-raid with xfs.
And last but not least, zfs easily ends up with a pool import error after a kernel crash or power outage, and it's a lottery whether a full pool restore is required afterwards (where's your backup then?) ...
So weigh the dreamy features of zfs against reality carefully. The easiest check is to build two plain server setups without beegfs first and compare meta + streaming performance, then pull the power plug while writing and see which filesystem is still there afterwards.
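For that comparison you don't need anything fancy; e.g. fio for streaming and mdtest for metadata (just one possible choice of tools, mountpoint and numbers are example values):

  # streaming write: 4 jobs, 1M blocks, 10G each
  fio --name=stream --directory=/mnt/test --rw=write --bs=1M --size=10G --numjobs=4 --group_reporting
  # metadata: create/stat/remove lots of small files
  mpirun -np 8 mdtest -n 10000 -d /mnt/test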
That's enough without going into config details, since no hw details are available anyway.