No, it means the minimum number of nodes (internal or bucket) are loaded to retrieve a given key or key range.
The primary use case of BTrees is storing very large mappings where you usually only want to access very small subsets of keys.
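For concreteness, here's a minimal sketch of that access pattern using the BTrees package (the integer keys and toy values are just for illustration; only the nodes and buckets on the path to the requested range need to be loaded when the tree lives in a database like ZODB):

```python
from BTrees.OOBTree import OOBTree  # pip install BTrees

tree = OOBTree()
for i in range(1000):
    tree[i] = "value-%d" % i

# Range access: keys 100 through 110 inclusive. Only the small part of
# the tree covering this range has to be touched.
for key, value in tree.items(100, 110):
    print(key, value)
```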
There's a feature of our implementation, though, that might be useful for time-series data, depending on the application. I mentioned that BTrees are trees of nodes with data in buckets at the leaves. But BTrees are also linked lists of buckets: each internal node has a reference to its first bucket, and each bucket has a reference to the next bucket in the tree. This allows efficient iteration.
In the example above, to iterate over the entire tree, we only had to load the top node and the buckets. We didn't have to load or traverse internal nodes.
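Here's a toy sketch (not the real BTrees internals) of that idea: because buckets are chained together, full iteration can walk the bucket chain directly without descending through internal nodes.

```python
class Bucket:
    def __init__(self, items):
        self.items = items   # sorted list of (key, value) pairs
        self.next = None     # next bucket in key order

def iterate(first_bucket):
    """Yield all items by following the bucket chain."""
    bucket = first_bucket
    while bucket is not None:
        for key, value in bucket.items:
            yield key, value
        bucket = bucket.next

# Three buckets chained together; iteration never touches internal nodes.
b1 = Bucket([(1, "a"), (2, "b")])
b2 = Bucket([(3, "c"), (4, "d")])
b3 = Bucket([(5, "e")])
b1.next, b2.next = b2, b3

print(list(iterate(b1)))  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]
```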
As always, picking and tuning data structures depends on application usage (update and access) patterns, but I can imagine time-series applications for which BTrees could work very well, especially with some compile-time tuning (much larger bucket sizes and different storage types).
Early in my career I worked a lot with hydrologic time-series data that was collected at short fixed time intervals (5 or 15 minutes). This data was very expensive to store at the time (the early eighties, when a 300MB disk was the size of a dishwasher and wildly more expensive). Efficient storage was critical, and the nature of the data made it very compressible, typically 90-95%, which was an important factor in how it was stored.
(I could imagine a variation of BTrees for fixed-time-interval data that avoided storing time values in buckets.)
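A rough, hypothetical sketch of what such a bucket might look like: store only the start time and the fixed interval, and compute timestamps on the fly instead of keeping one per sample. The names here are illustrative, not a real API.

```python
from datetime import datetime, timedelta

class FixedIntervalBucket:
    def __init__(self, start, interval, values):
        self.start = start        # timestamp of the first sample
        self.interval = interval  # fixed spacing between samples
        self.values = values      # just the measurements, no timestamps

    def items(self):
        for i, value in enumerate(self.values):
            yield self.start + i * self.interval, value

bucket = FixedIntervalBucket(
    datetime(1984, 6, 1), timedelta(minutes=15), [1.2, 1.3, 1.1, 0.9]
)
for when, value in bucket.items():
    print(when, value)
```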
Of course, storage is much cheaper now, but we store a lot more data. The same factors that made a custom storage format for time-series data attractive in the 80s make columnar formats like Parquet and ORC popular today.
(Hm, writing that made me wonder if it would be useful to have a BTree variant that used Numpy/Arrow/xnd arrays to store bucket data. Perhaps memory-mapped...Hm....)
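(Purely to illustrate that musing, something along these lines: bucket values kept in a memory-mapped NumPy array rather than as Python objects. The file name and shape are made up.)

```python
import numpy as np

# Write bucket values into a memory-mapped file.
values = np.memmap("bucket-0001.dat", dtype="float64", mode="w+", shape=(4,))
values[:] = [1.2, 1.3, 1.1, 0.9]
values.flush()

# Later (or in another process), map the same file read-only.
readonly = np.memmap("bucket-0001.dat", dtype="float64", mode="r", shape=(4,))
print(readonly[:2])
```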
Jim