Hi Will,
I am running some new benchmarks to see the memory usage on larger datasets. To some extent, though, the memory will depend on your samples. If you have a high-diversity sample it may take the full number of iterations to converge on a solution, which can substantially increase memory usage. You can try reducing the Max iterations and Convergence iterations, and that will help a little.
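For a rough picture of what those settings do: the clustering underneath is affinity propagation (via scikit-learn), and those two options just cap how long the message-passing runs before it gives up or declares convergence. Here's a toy sketch of that call so you can see which knobs map where - the coverage matrix and the exact values below are placeholders, not your real inputs or necessarily the defaults you're running with:

```python
# Toy sketch of the scikit-learn affinity propagation call that the
# clustering step is built on. Values here are illustrative only.
import numpy as np
from sklearn.cluster import AffinityPropagation

# Rows = contigs, columns = per-sample coverage (random toy data).
coverage = np.random.rand(500, 4)

ap = AffinityPropagation(
    damping=0.95,          # heavier damping = slower, steadier updates
    max_iter=2000,         # lower this to cap runtime/memory on diverse samples
    convergence_iter=200,  # iterations with no change before declaring convergence
    preference=-3,         # more negative = fewer, larger clusters
)
labels = ap.fit_predict(coverage)
print(len(set(labels)), "clusters")
```

Lowering max_iter and convergence_iter trades a bit of clustering stability for shorter runs, which is usually an acceptable trade on very diverse samples that would otherwise never settle.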
If you're working on a shared server, it may also be that other users are requesting a lot of memory and that's crowding out your run.
Memory usage is an ongoing issue that I'm working to get around. For us, running about 100,000 contigs from the surface ocean ends up requiring ~250-300 GB of memory.
Feel free to let me know if you want any help finding the best preferences for your sample type. With Binsanity-lc in particular, I've experimented a lot with what works best while maintaining as much accuracy as possible.
-Elaina