Hi Oleg,
At 100 events/s you can have 8,640,000 events a day, and depending on the event_type cardinality, aggregating all of that at query time can be a problem.
One solution is to use a coarser resolution (like 1 min).
One way to do it is with two temporary counters for each event_type: one for the number of events in that period of time, the other for the sum of the event values.
Then, each minute, you dump the value of each into two sorted sets: again, one for the count, the other for the sum.
The timestamp would be the sorted-set score in both zsets, and the value (count/sum) the member. (Since zset members must be unique, it's safer to prefix the member with the timestamp, e.g. "1388534400:42", so two minutes with the same value don't collide.)
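A minimal sketch of the write path, assuming redis-py; the key names (tmp:count:*, tmp:sum:*, hist:count:*, hist:sum:*) and the run-from-cron flush are just illustrative:

    import time
    import redis

    r = redis.Redis()

    def record_event(event_type, value):
        # Two O(1) bumps per event: one count, one running sum.
        pipe = r.pipeline()
        pipe.incr('tmp:count:' + event_type)
        pipe.incrbyfloat('tmp:sum:' + event_type, value)
        pipe.execute()

    def flush_minute(event_type):
        # Run once a minute (e.g. from cron): read-and-reset each
        # counter, then dump it into the matching sorted set.
        ts = int(time.time()) // 60 * 60  # start of the current minute
        count = int(r.getset('tmp:count:' + event_type, 0) or 0)
        total = float(r.getset('tmp:sum:' + event_type, 0) or 0)
        # score = timestamp, member = "timestamp:value"; the timestamp
        # prefix keeps members unique across minutes with equal values.
        r.zadd('hist:count:' + event_type, {'%d:%d' % (ts, count): ts})
        r.zadd('hist:sum:' + event_type, {'%d:%g' % (ts, total): ts})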
Then you can easily query a time window from both sorted sets with ZRANGEBYSCORE, and they will be much smaller and faster to retrieve.
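Reading back, under the same assumed key layout, is one ZRANGEBYSCORE per zset plus a small client-side aggregation:

    def query_window(event_type, start_ts, end_ts):
        # At 1-minute resolution a 1-hour window is at most 60 members
        # per zset, regardless of the per-second event rate.
        counts = r.zrangebyscore('hist:count:' + event_type, start_ts, end_ts)
        sums = r.zrangebyscore('hist:sum:' + event_type, start_ts, end_ts)
        # Members look like b"timestamp:value"; strip the prefix.
        total_count = sum(int(m.split(b':')[1]) for m in counts)
        total_sum = sum(float(m.split(b':')[1]) for m in sums)
        return total_count, total_sum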
This way the number of events per second isn't a problem at query time.
With sorted sets you can easily discard old data too (ZREMRANGEBYSCORE on the timestamp scores).
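For example (same assumed keys), dropping buckets older than 30 days:

    def trim(event_type, max_age=30 * 24 * 3600):
        cutoff = int(time.time()) - max_age
        # Scores are timestamps, so one range delete per zset.
        r.zremrangebyscore('hist:count:' + event_type, '-inf', cutoff)
        r.zremrangebyscore('hist:sum:' + event_type, '-inf', cutoff)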
* You could probably use only the sorted sets (avoiding the temporary counters) with ZINCRBY, but its time complexity is higher (O(log(N)) vs O(1)).
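A sketch of that variant, assuming you invert the roles (minute timestamp as member, running aggregate as score), so each event costs two ZINCRBYs instead of two O(1) INCRs; note that time-window queries then go by member rather than by score:

    def record_event_zset(event_type, value):
        bucket = str(int(time.time()) // 60 * 60)
        pipe = r.pipeline()
        pipe.zincrby('hist:count:' + event_type, 1, bucket)    # O(log(N))
        pipe.zincrby('hist:sum:' + event_type, value, bucket)  # O(log(N))
        pipe.execute()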