You can have all workers BLPOP from the same list, safely.
It's unlikely that the popping of the items itself is a bottleneck for
you, so I'd suggest using this approach.
Generally yes, but it depends on the queue items. If the queue items
are events that are to be aggregated, then it's not unreasonable for
reading processes to be able to consume at a rate significantly higher
than BLPOP can return items, particularly given the network
request/response latency (I've consumed items at a rate of 1-2 million
per second per client in the past, which would be impossible with
BLPOP).
To answer the OP's question:
MULTI
LRANGE key 0 99
LTRIM key 100 -1
EXEC
That will pull the first 100 items from the list, then delete them,
all atomically.
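From a client, the same pattern might look roughly like this, assuming
redis-py's pipeline(transaction=True) API (the function and key names
are just illustrative):

```python
def pop_batch(client, key, count=100):
    """Atomically pop up to `count` items from the head of a Redis list.

    `client` is a redis-py style client (hypothetical sketch; assumes
    pipeline(transaction=True) wraps the commands in MULTI/EXEC).
    """
    pipe = client.pipeline(transaction=True)  # MULTI
    pipe.lrange(key, 0, count - 1)            # read the first `count` items
    pipe.ltrim(key, count, -1)                # drop them from the list
    items, _trimmed = pipe.execute()          # EXEC
    return items
```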
If you are producing those entries in bigger chunks, you could
pre-chunk them into modestly sized groups (I have used chunk sizes of
25-1000, depending on the data type), then store them in some sort of
packed representation (I usually use JSON). You can then use BLPOP to
get your pre-defined chunks. By doing this, you reduce the commands
being executed on the Redis side of things, minimize the number of
buffers that need to be copied, etc.
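As a sketch, the packing side might look like this (function names and
default chunk size are illustrative; I'm using JSON as mentioned
above):

```python
import json

def pack_chunks(items, chunk_size=100):
    """Group items into JSON-packed payloads, one per RPUSH.

    Chunk sizes of 25-1000 have worked for me, depending on the data
    type.
    """
    return [json.dumps(items[i:i + chunk_size])
            for i in range(0, len(items), chunk_size)]

def unpack_chunk(payload):
    """Decode one packed payload, as returned by BLPOP."""
    return json.loads(payload)
```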
Regards,
- Josiah
Localhost in EC2. Across the network in EC2 we were seeing closer to
200k per second per client. That was using the pre-chunked data at a
size of 100 items per chunk, each item roughly 10 bytes. I would
actually perform pipelined non-blocking list pops (no multi/exec),
then filter out the null entries.
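That pattern, sketched in redis-py style (hypothetical names; assumes
pipeline(transaction=False) sends the commands without MULTI/EXEC):

```python
def drain(client, key, n=100):
    """Issue `n` plain LPOPs in one pipeline, then drop the None
    results that come back once the list runs dry."""
    pipe = client.pipeline(transaction=False)  # pipelined, no MULTI/EXEC
    for _ in range(n):
        pipe.lpop(key)
    return [item for item in pipe.execute() if item is not None]
```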
>> if you are producing those entries in bigger chunks, you could
>> pre-chunk them into modestly sized groups (I have used chunk sizes of
>> 25-1000, depending on the data type), then store them in some sort of
>> packed representation
> Why there is a need for an intermediary step ? Can't the elements be
> processed directly by the consumer without aggregating into a separate
> representation ?
For our needs, no. There are a few ways that we could have made them
even more efficient (10 bytes to 4 bytes by using native packed
integers instead of JSON-encoded lists of ints, bigger chunks, faster
decoding, etc.), but by the point this stuff was no longer a
bottleneck, we had moved on to the stuff that was.
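To illustrate the size difference (hypothetical IDs; assumes the items
fit in 32-bit ints):

```python
import json
import struct

ids = [1234567890, 1234567891]  # hypothetical event IDs

as_json = json.dumps(ids).encode("ascii")         # ~12 bytes per int as text
as_packed = struct.pack("<%di" % len(ids), *ids)  # exactly 4 bytes per int

print(len(as_json), len(as_packed))
```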
Regards,
- Josiah
With scripting you can hack it so the LRANGE, RPUSH, and LTRIM all
happen inside Redis via a Lua script, but don't use any blocking
operations in the script, as those won't work.
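For example, a minimal sketch of a server-side batch pop (untested;
KEYS[1] is the list, ARGV[1] the batch size; blocking commands like
BLPOP are rejected inside scripts, so callers have to poll):

```lua
-- EVAL script sketch: pop up to ARGV[1] items from the head of KEYS[1].
local n = tonumber(ARGV[1])
local items = redis.call('LRANGE', KEYS[1], 0, n - 1)
if #items > 0 then
    redis.call('LTRIM', KEYS[1], n, -1)
end
return items
```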
Regards,
- Josiah