dbsr...@git.shadowcat.co.uk:DBM-Deep.git
g...@github.com:robkinyon/dbm-deep.git
The shadowcat repos will be considered authoritative and should be
preferred. Go ahead and clone them, then submit patches to me. If you
must, use the github one. If we can get along without it, I'd prefer
to decommission it, but in the spirit of git, I'm not going to be
fussy.
I have to add the tags for 1.0009-1.0013 (they got lost), but that
should be all that's missing. Also, all the branches are on the SC
repos, but may not be in the github repos. (I'm still figuring this
git thing out.)
--
Thanks,
Rob Kinyon
Q: Why the preference against github?
--
Steven Lembark                                          85-09 90th St.
Workhorse Computing                               Woodhaven, NY, 11421
lem...@wrkhors.com                                     +1 888 359 3508
Because if github goes down, that spikes the process. SC won't go down
without providing what I need to export.
Rob
Q: How well would the current DBM::Deep handle a
hash with roughly 160_000 keys?
"I have no idea, noone has been idiotic enough to
try it" would be a reasonable response.
Thinking you're tied to one central repo seems to miss the point of
distributed anyway. Simply keeping it up-to-date on github is enough.
It's a complete repo and you can interact with any repo just the same
as any other.
I'll choose github, others may choose SC (whatever that is) and I'm
sure a clone will turn up on Gitorious and repo.or.cz ...
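(To make the "any repo is as good as any other" point concrete, here
is a sketch of tracking both copies from a single clone. The remote
name is arbitrary, and the github clone URL is assumed from the repo
path quoted at the top of the thread; substitute the shadowcat URL
where indicated.)

    git clone https://github.com/robkinyon/dbm-deep.git
    cd dbm-deep
    git remote add shadowcat <the git.shadowcat.co.uk URL above>
    git fetch shadowcat
    git branch -r    # branches from both repos, side by side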
--
If riding in an airplane is flying, then riding in a boat is swimming.
113 jumps, 46.6 minutes of freefall, 89.1 freefall miles.
My hashes fit easily into core, it's just
that persisting the data via flat files can
become difficult with a single directory
containing 160K files.
My data structure is also fairly flat: using
$hash{ $namespace, $key } leaves me with many
keys and small values.
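($hash{ $namespace, $key } is Perl's multidimensional-hash emulation:
the subscripts are joined with $; into a single flat key. Here is a
sketch of that same pattern persisted through DBM::Deep instead of
160K flat files; the file name is made up.)

    use strict;
    use warnings;
    use DBM::Deep;

    # One DBM::Deep file replaces the directory of flat files.
    tie my %hash, 'DBM::Deep', 'namespaces.db';   # file name is made up

    # $namespace and $key are joined with $; into a single flat key,
    # so the structure stays one level deep: many keys, small values.
    my ( $namespace, $key ) = ( 'config', 'timeout' );
    $hash{ $namespace, $key } = 30;

    print $hash{ 'config', 'timeout' }, "\n";    # prints 30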
Q: How large are the chunks of data you fetch
during the walking cycle?
Knowing might help me tune things a bit.
Thanks for the information.