If Redis crashes between receiving a published message and sending it
to the subscribed clients, that message is lost. Redis does not
persist publish/subscribe traffic: it is not written to the AOF, it is
not included in RDB dumps, etc.
It would seem that you are looking for a guarantee that "message X was
sent to all clients at least once". First, let's set your
expectations. Any system that persists 100% of all messages, and can
also guarantee 100% that every client has received each message at
least once, can handle at most around 100 messages/second on spinning
disks (each message must be synced to disk, so at roughly 10ms per
write, that gets you 100 messages/second). Perhaps 300 per second with
fast hardware, maybe a bit more with SSDs. That's probably going to be
too slow, so my first recommendation, if you want a solid 99% solution
using Redis: use AOF with fsync every second, use periodic AOF
rewriting, and don't use publish/subscribe.
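For reference, that AOF-every-second setup maps to redis.conf settings
like the following (the rewrite thresholds here are the stock defaults,
shown for illustration; tune them for your workload):

```conf
# Enable the append-only file and sync it to disk once per second.
appendonly yes
appendfsync everysec

# Periodic AOF rewriting: rewrite when the file has doubled in size,
# but never for files smaller than 64mb.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```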
Because Redis doesn't persist publish/subscribe, you have options for
how you "send" messages. My recommendation: zsets.
To publish a message:

    def zset_publish(conn, channel, message):
        # INCR returns the new integer value of the counter
        msg_id = conn.incr(COUNTER)
        # modern redis-py takes a {member: score} mapping; older
        # versions used conn.zadd(channel, msg_id, message)
        conn.zadd(channel, {message: msg_id})
        return msg_id
To subscribe to a channel:

    def zset_subscribe(conn, channel, backlog=0):
        # GET returns a string (or None), so convert before doing math
        last = int(conn.get(COUNTER) or 0) - backlog
        sleeptime = .002
        while not QUIT:
            cur = int(conn.get(COUNTER) or 0)
            while last < cur:
                sleeptime = .001
                last += 1
                item = (conn.zrangebyscore(channel, last, last) or [None])[0]
                if item is not None:
                    pass  # process the item
            if sleeptime != .001:
                # only sleep if we didn't have anything to process
                time.sleep(sleeptime)
            sleeptime = min(sleeptime * 2, .25)
And call this periodically to clean out old messages:

    def zset_clean(conn, channel, backlog=100):
        minimum = int(conn.get(COUNTER) or 0) - backlog
        conn.zremrangebyscore(channel, '-inf', minimum)
There is a race condition where a publisher could increment the
counter and have clients try to read before the message gets added to
the channel, but that can be fixed if it matters.
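To make that fix concrete, here is one sketch of a reader that retries
a missing id a few times before giving up, instead of silently skipping
it. The `FakeRedis` class is a tiny in-memory stand-in I'm using so the
sketch is self-contained; it only mimics the handful of redis-py calls
used above and is not a real Redis client.

```python
import time

COUNTER = 'channel:counter'

class FakeRedis:
    """In-memory stand-in mimicking the redis-py calls used here."""
    def __init__(self):
        self.strings = {}
        self.zsets = {}
    def incr(self, key):
        self.strings[key] = int(self.strings.get(key, 0)) + 1
        return self.strings[key]
    def get(self, key):
        return self.strings.get(key)
    def zadd(self, key, mapping):
        self.zsets.setdefault(key, {}).update(mapping)
    def zrangebyscore(self, key, lo, hi):
        return [m for m, s in sorted(self.zsets.get(key, {}).items(),
                                     key=lambda ms: ms[1]) if lo <= s <= hi]

def safe_read(conn, channel, last, retries=3):
    """Read all message ids in (last, cur], retrying briefly when the
    counter has been incremented but the ZADD hasn't landed yet."""
    items = []
    cur = int(conn.get(COUNTER) or 0)
    while last < cur:
        nxt = last + 1
        found = None
        for _ in range(retries):
            got = conn.zrangebyscore(channel, nxt, nxt)
            if got:
                found = got[0]
                break
            time.sleep(.001)  # publisher may be between INCR and ZADD
        if found is None:
            break  # don't skip the id; pick it up on the next pass
        items.append(found)
        last = nxt
    return items, last
```

The key design choice is that the reader never advances `last` past an
id it hasn't actually seen, so a message published a moment later is
still picked up on the next polling pass.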
Regards,
- Josiah
A slave can pass messages on to a client if the slave had received the
message, or if you start publishing messages directly to that client.
> as a very nice workaround, if slave approach fails in this case.
>
> Is there any plan of persisting pub/sub messages in future?
Not as far as I know.
Regards,
- Josiah
Sorted sets let you have multiple readers at the same time. One reader
"consuming" an item doesn't stop other readers from "consuming" that
same item. Further, subscribers can read from slave Redis instances,
etc. If you wanted to use a list, then everyone would need to
LPUSH/RPOP on a single Redis instance, and one reader consuming an
item would stop anyone else from consuming that same item. That's very
useful for a task queue, but not useful at all for publish/subscribe,
where you may have more than one subscriber to a channel.
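The difference is easy to see with a toy model in plain Python (these
are stand-ins for the two Redis data structures, not real Redis calls):

```python
from collections import deque

def pop_task(queue):
    """List as a queue: popping removes the item, like RPOP, so only
    one reader ever gets a given item."""
    return queue.pop() if queue else None

def read_log(log, last):
    """Zset as a log: `log` maps member -> score (the counter id).
    Each reader keeps its own `last` cursor, so every reader sees
    every message, in id order."""
    return sorted((m for m, s in log.items() if s > last),
                  key=lambda m: log[m])
```

With a list, the first reader to pop `task-1` leaves nothing for the
second; with the zset model, two readers with independent cursors both
read the same messages.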