Redis PUB/SUB replay


hitechnical

Jun 7, 2011, 2:18:06 AM
to Redis DB
I am not sure whether this has been asked already. This question is
related to the PUB/SUB model of Redis.

Let's say Redis receives a message from a client and crashes before it
sends it to all subscribers. When it comes back, will it replay the
message? If not, what are the ways to handle this in Redis? The
publishing client has no way of knowing whether the published message
reached all subscribers.

Josiah Carlson

Jun 7, 2011, 2:54:40 AM
to redi...@googlegroups.com

If Redis manages to crash between receiving a published message and
sending it to the clients, it won't get sent. Redis does not persist
publish/subscribe messages: they are not written to the AOF, and they
are not written during an RDB dump, etc.

It would seem that you are looking for a guaranteed "message X was
sent to all clients at least once". First, let's fix your
expectations. Any system that manages to persist 100% of all messages,
and can also guarantee 100% that any client has received a message at
least once, can handle at most 100 messages/second (each message must
be written to disk, so at 10ms per write, that gets you 100
messages/second). Perhaps 300 per second with fast hardware, maybe a
bit more with SSDs. That's probably going to be too slow, so my first
recommendation, if you want a solid 99% solution using Redis: use AOF
every second, use periodic AOF rewriting, and don't use
publish/subscribe.
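
(For reference, the "AOF every second" setting corresponds to something
like the redis.conf fragment below; the rewrite thresholds are
illustrative, and on versions without the auto-rewrite directives the
periodic rewrite would just be a scheduled BGREWRITEAOF.)

appendonly yes              # log every write to the append-only file
appendfsync everysec        # fsync the AOF once per second
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb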

Because Redis doesn't persist publish/subscribe, you have options for
how you "send" messages. My recommendation: zsets.

To publish a message:

import time
import uuid

import redis

COUNTER = 'channel:counter'  # global message-id counter key (name assumed)
QUIT = False                 # set to True to stop the subscriber loop below


def zset_publish(conn, channel, message):
    # each message takes the next id from the global counter and is
    # stored in the channel's zset with that id as its score
    msg_id = conn.incr(COUNTER)
    conn.zadd(channel, {message: msg_id})  # redis-py 3.x zadd signature

To subscribe to a channel:

def zset_subscribe(conn, channel, backlog=0):
    me = uuid.uuid4()  # subscriber id (not used below)
    last = int(conn.get(COUNTER) or 0) - backlog
    sleeptime = .002
    while not QUIT:
        cur = int(conn.get(COUNTER) or 0)
        while last < cur:
            sleeptime = .001
            last += 1
            item = (conn.zrangebyscore(channel, last, last) or [None])[0]
            if item:
                pass  # process the item here
        if sleeptime != .001:
            # only sleep if we didn't have anything to process
            time.sleep(sleeptime)
        sleeptime = min(sleeptime * 2, .25)

And call this periodically to clean out old messages:

def zset_clean(conn, channel, backlog=100):
    # drop everything except the most recent `backlog` message ids
    minimum = int(conn.get(COUNTER) or 0) - backlog
    conn.zremrangebyscore(channel, '-inf', minimum)
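
As a rough usage sketch (the connection and the channel name here are
placeholders, not part of the recommendation above):

conn = redis.Redis()                        # assumes a local Redis instance
zset_publish(conn, 'chat:lobby', 'hello')   # store the message under the next id
zset_clean(conn, 'chat:lobby')              # keep only the last 100 message ids
# zset_subscribe(conn, 'chat:lobby', backlog=10)  # run the read loop in its own thread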


There is a race condition where someone could increment the counter
and have the clients try to read before the message gets added to the
channel, but that can be fixed if it matters.
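
One way to paper over that race, sketched as an illustration (the
read_with_retry helper is hypothetical and reuses the imports and
COUNTER key from the snippets above): have the reader retry an id
briefly before skipping it, since the ZADD normally lands right after
the INCR.

def read_with_retry(conn, channel, msg_id, attempts=5, delay=.001):
    # a missing id is usually just a publish still in flight between
    # INCR and ZADD, so poll a few times before giving up on it
    for _ in range(attempts):
        item = (conn.zrangebyscore(channel, msg_id, msg_id) or [None])[0]
        if item is not None:
            return item
        time.sleep(delay)
    return None  # the publisher may have died between INCR and ZADD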

Regards,
- Josiah

hitechnical

Jun 7, 2011, 4:38:49 AM
to Redis DB
Thanks. I know it is possible to set up slaves around Redis. If this
works with Pub/Sub, then in case of a master failure, can't a slave
pass the message on to the client? Is that a correct assumption? I see
the ZSET approach as a very nice workaround if the slave approach fails
in this case.

Is there any plan to persist pub/sub messages in the future?

Josiah Carlson

Jun 7, 2011, 11:50:37 AM
to redi...@googlegroups.com
On Tue, Jun 7, 2011 at 1:38 AM, hitechnical <vad...@gmail.com> wrote:
> Thanks. I know it is possible to set up slaves around Redis. If this
> works with Pub/Sub, then in case of a master failure, can't a slave
> pass the message on to the client? Is that a correct assumption? I see the ZSET approach

It can pass messages on to a client if the slave had received the
message, or if you start publishing messages to the slave directly.

> as a very nice workaround if the slave approach fails in this case.
>
> Is there any plan to persist pub/sub messages in the future?

Not as far as I know.

Regards,
- Josiah


hitechnical

Jun 8, 2011, 1:40:47 PM
to Redis DB
Thanks Josiah, I am trying your solution of using sorted sets. I am
wondering whether it would be easier to use LPUSH/RPOP instead of a
sorted set. Ideally, I need a FIFO approach to processing the messages.
Do you see any caveats in using lists over sorted sets? Thanks for your
great tips.

Josiah Carlson

Jun 8, 2011, 3:02:21 PM
to redi...@googlegroups.com
On Wed, Jun 8, 2011 at 10:40 AM, hitechnical <vad...@gmail.com> wrote:
> Thanks Josiah, I am trying your solution of using sorted sets. I am
> wondering whether it would be easier to use LPUSH/RPOP instead of a
> sorted set. Ideally, I need a FIFO approach to processing the messages.
> Do you see any caveats in using lists over sorted sets? Thanks for your
> great tips.

Sorted sets let you have multiple readers at the same time. One reader
"consuming" an item doesn't stop other readers from "consuming" that
same item. Further, subscribers can be on slave redis instances, etc.

If you wanted to use a list, then everyone would need to LPUSH/RPOP
against a single Redis instance, and one reader consuming an item would
stop anyone else from consuming that same item. That is very useful for
a task queue, but not useful at all for publish/subscribe, where you may
have more than one subscriber to a channel.
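
To make the contrast concrete, a list-based consumer would look roughly
like the sketch below (the queue name is a placeholder and the QUIT flag
follows the earlier snippets); each pop removes the item, so exactly one
reader ever sees it:

def list_consume(conn, queue):
    # BRPOP blocks until an item is available, then removes it from the
    # list, so each item goes to exactly one consumer -- task-queue
    # semantics, not fan-out to every subscriber
    while not QUIT:
        popped = conn.brpop(queue, timeout=1)
        if popped:
            _key, item = popped
            # process the item here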
