Hi Glen,
I apologize if this is a duplicate. I thought I sent a reply but it's not showing up. I'm new to Google Groups so I probably just hit the wrong button somewhere.
The way I was describing it, you would not be able to retrieve the connections in descending weight order without iterating over the entire set. I'm guessing you would only be interested in the top m items, so you would likely want to use a heap for this rather than pulling everything into memory and sorting. Besides using less memory, this gives you O(n log m) performance rather than O(n log n). Let me know if you need more details here.
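If it helps, here's a rough sketch of the heap approach in Java. The Connection record and topM name are just placeholders I made up for illustration, not anything from the library:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.PriorityQueue;

    // Hypothetical record for a connection: a connected ordinal plus a weight.
    record Connection(int ordinal, int weight) {}

    public class TopM {
        // Keep a min-heap of at most m elements while scanning the n
        // connections once: O(n log m) time, O(m) memory.
        static List<Connection> topM(Iterable<Connection> connections, int m) {
            PriorityQueue<Connection> heap = new PriorityQueue<>(
                Math.max(1, m), (a, b) -> Integer.compare(a.weight(), b.weight()));
            for (Connection c : connections) {
                if (heap.size() < m) {
                    heap.offer(c);
                } else if (c.weight() > heap.peek().weight()) {
                    heap.poll();  // evict the current minimum
                    heap.offer(c);
                }
            }
            // Draining the heap yields ascending order; reverse for descending.
            List<Connection> result = new ArrayList<>(heap.size());
            while (!heap.isEmpty()) result.add(heap.poll());
            Collections.reverse(result);
            return result;
        }
    }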
I was thinking of storing the weights as variable-byte integers, so you'd likely want to normalize the weights yourself, depending on the resolution you need. Because of the encoding, you get the most efficient use of the available bits by normalizing to the range 0 to 127, or 0 to 16383 (or, in general, 0 to 2^(7n) - 1 for an n-byte encoding).
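For concreteness, here's what I mean, using the standard variable-byte scheme (I'm assuming the library uses this layout; the normalize helper is just one way you might do it):

    import java.io.ByteArrayOutputStream;

    public class VByteSketch {
        // Standard variable-byte encoding: 7 payload bits per byte, with the
        // high bit marking a continuation. Values in [0, 127] take one byte,
        // [0, 16383] at most two, and [0, 2^(7n) - 1] fits in n bytes.
        static void writeVInt(ByteArrayOutputStream out, int value) {
            while ((value & ~0x7F) != 0) {
                out.write((value & 0x7F) | 0x80);
                value >>>= 7;
            }
            out.write(value);
        }

        // One way to normalize: map a raw weight in [min, max] onto [0, 127]
        // so every weight encodes to a single byte. Widen the target range
        // (e.g. to 16383) if you need finer resolution. Assumes max > min.
        static int normalize(double weight, double min, double max) {
            return (int) Math.round((weight - min) / (max - min) * 127);
        }
    }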
If you know you'll mostly be retrieving this data in descending weight order, you may want to make a custom modification to the library which, at compression time, sorts the normalized connections by weight descending, then applies a "reverse" delta compression to the weights (when reading, you subtract the deltas instead of adding them). That way you would at least get the benefit of gap compression on the weights, though not on the connected ordinals, which is the inverse of what I was originally suggesting.
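The encode/decode pair would look something like this. I'm only showing the weights here to keep it short; in practice you'd keep each ordinal paired with its weight through the sort:

    import java.util.Arrays;

    public class ReverseDeltaSketch {
        // Encode: sort descending, store the first weight, then each gap
        // prev - current (always >= 0, so the gaps stay small and
        // variable-byte friendly).
        static int[] encode(int[] weights) {
            int[] w = weights.clone();
            Arrays.sort(w);
            for (int i = 0, j = w.length - 1; i < j; i++, j--) {  // reverse to descending
                int t = w[i]; w[i] = w[j]; w[j] = t;
            }
            int[] deltas = new int[w.length];
            for (int i = 0; i < w.length; i++) {
                deltas[i] = (i == 0) ? w[0] : w[i - 1] - w[i];
            }
            return deltas;
        }

        // Decode: subtract each delta from the running value instead of
        // adding, which hands back the weights already in descending order.
        static int[] decode(int[] deltas) {
            int[] w = new int[deltas.length];
            int prev = 0;
            for (int i = 0; i < deltas.length; i++) {
                prev = (i == 0) ? deltas[0] : prev - deltas[i];
                w[i] = prev;
            }
            return w;
        }
    }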
Of course, if you use the hash format, you don't get gap compression anyway, and you won't be able to retrieve the results in any kind of sorted order, but you can determine the weight of any given connection in constant time.
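Just to make that trade-off concrete, illustrating with a plain HashMap rather than the library's actual on-disk hash format:

    import java.util.HashMap;
    import java.util.Map;

    public class HashLookupSketch {
        public static void main(String[] args) {
            // ordinal -> normalized weight
            Map<Integer, Integer> weightByOrdinal = new HashMap<>();
            weightByOrdinal.put(42, 97);
            weightByOrdinal.put(7, 12);

            // O(1) point lookup; but there's no cheap way to walk this
            // in weight order without collecting and sorting the entries.
            Integer w = weightByOrdinal.get(42);
            System.out.println("weight of ordinal 42 = " + w);
        }
    }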
Drew.