Thank you in advance
It might be best if you ran your own experiments.
One approach is to split very large data sets into separate @file
nodes. Leo has a good file caching system, so splitting dramatically
speeds up loading of the .leo file itself. This is the approach taken
with leoPy.leo, for example.
Edward
You can also use @url nodes pointing at other Leo files to make handy
links without loading those files every time. And leo.external.leosax
gives fast, read-only access to the raw tree info in Leo files without
fully processing them.
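To illustrate the read-only approach: since .leo files are XML, a quick outline scan can be done with just the standard library. This is only a sketch of the idea behind leosax, not its actual API; the element names used here (<vnodes>, nested <v> elements, <vh> headline text) are assumptions about the .leo format, so check them against your own files.

```python
# Fast, read-only scan of a .leo file's outline headlines using only the
# standard library.  Node bodies (<tnodes>) are never touched, which is
# what makes this kind of scan cheap for large files.
# ASSUMPTION: .leo files store the outline as <vnodes> with nested <v>
# elements whose headline text lives in a <vh> child.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="utf-8"?>
<leo_file>
<vnodes>
<v t="a"><vh>@file big_module.py</vh>
  <v t="b"><vh>helpers</vh></v>
</v>
<v t="c"><vh>@url other.leo</vh></v>
</vnodes>
</leo_file>"""

def headlines(xml_text):
    """Return (depth, headline) pairs without reading node bodies."""
    root = ET.fromstring(xml_text)
    out = []
    def walk(v, depth):
        vh = v.find("vh")
        if vh is not None:
            out.append((depth, vh.text))
        for child in v.findall("v"):
            walk(child, depth + 1)
    for v in root.find("vnodes").findall("v"):
        walk(v, 0)
    return out

for depth, h in headlines(SAMPLE):
    print("  " * depth + h)
```

For a real file you would pass the file's contents instead of SAMPLE; the point is just that the outline can be read without Leo processing every node.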
Cheers -Terry