Hi Adam,
The lower you set defrag.limit, the more frequent but shorter the
defrags will be.
Given your amount of data, I'd suggest trying a limit between 5 and
10 and seeing whether your use case can cope with that.
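For example, a minimal sketch of what I mean (I'm assuming a
properties-style entry here; adjust the name/syntax to however you
actually configure defrag.limit in your setup):

    # A lower value triggers defrags more often, so each run stays short.
    defrag.limit=5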
That said, if your dataset keeps growing and you either do a lot of
manual deletes or have several metrics with sampling configured,
you're obviously right: whatever defrag limit you set, sooner or
later (at roughly 100GB in your case, say) you're bound to hit long
wait times.
I'll see if I can find a way to improve in this area, but the obvious
choice of changing the datastore is not really an option right now:
moving to a distributed/scalable store would make history and
aggregate queries difficult, if possible at all.
Another option would be to run separate Nimrod instances (and hence
separate databases): would that work for you?
--
Sergio Bossa
http://www.linkedin.com/in/sergiob