Regressions in 2.6 when out of memory & writing during multi-exec?

Showing 1-24 of 24 messages
Regressions in 2.6 when out of memory & writing during multi-exec? Martyn Loughran 11/15/12 9:56 AM
While upgrading from 2.4 to 2.6 I noticed some changes which I haven't seen documented, and look like regressions to me.

Scenario: out of memory & writing during multi-exec

# Change 1 - exec succeeds even if one of the commands was an error

On redis 2.4 (tested 2.4.17)

    redis 127.0.0.1:6379> config get maxmemory
    1) "maxmemory"
    2) "0"
    redis 127.0.0.1:6379> multi
    OK
    redis 127.0.0.1:6379> set foo yyy
    QUEUED

In another client set memory limit `config set maxmemory 1`

    redis 127.0.0.1:6379> set bar uuu
    (error) ERR command not allowed when used memory > 'maxmemory'
    redis 127.0.0.1:6379> exec
    (error) ERR command not allowed when used memory > 'maxmemory'
    
    (transaction still open; DISCARD must be called)

On redis 2.6 (tested 2.6.4)

    redis 127.0.0.1:6379> config get maxmemory
    1) "maxmemory"
    2) "0"
    redis 127.0.0.1:6379> multi
    OK
    redis 127.0.0.1:6379> set foo yyy
    QUEUED

In another client set memory limit `config set maxmemory 1`

    redis 127.0.0.1:6379> set bar uuu
    (error) OOM command not allowed when used memory > 'maxmemory'.
    redis 127.0.0.1:6379> exec
    1) OK
    redis 127.0.0.1:6379> get foo
    "yyy"
    redis 127.0.0.1:6379> get bar
    (nil)

This seems like a pretty serious issue. foo and bar were set during a transaction, but only one of the keys was written!

The only way to code around this is to check the response from each of the commands during a multi, and to only exec if there were no errors. This is problematic for all sorts of reasons. Previously it was easy to see whether the transaction succeeded by reading the exec reply.
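For example, the check might look something like this (a sketch in Python with illustrative names and reply formats, not tied to any particular client library):

```python
# Sketch of the workaround on 2.6.4: inspect every reply received while
# queueing, and only issue EXEC if all commands were actually queued.
# Function name and reply strings here are illustrative only.

def safe_to_exec(queue_replies):
    """Return True only if every command was accepted into the transaction."""
    return all(reply == "QUEUED" for reply in queue_replies)

# The second SET was rejected while queueing, so the client should
# issue DISCARD instead of EXEC:
replies = [
    "QUEUED",
    "(error) OOM command not allowed when used memory > 'maxmemory'.",
]
print(safe_to_exec(replies))  # False
```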

The OOM error is also inconsistent with this line in the transaction docs "When a Redis connection is in the context of a MULTI request, all commands will reply with the string QUEUED unless they are syntactically incorrect".

# Change 2 - memory limit enforced during queuing rather than exec

This is related to the first change, but I just wanted to be explicit.

On redis 2.4 (tested 2.4.17) the memory limit is enforced when exec is called

    redis 127.0.0.1:6379> multi
    OK
    redis 127.0.0.1:6379> set foo asdf
    QUEUED

In another client set memory limit `config set maxmemory 1`, then exec the transaction

    redis 127.0.0.1:6379> exec
    (error) ERR command not allowed when used memory > 'maxmemory'

As expected the set fails.

For redis 2.6 (tested 2.6.4), the set does not fail

    redis 127.0.0.1:6379> multi
    OK
    redis 127.0.0.1:6379> set foo asdf
    QUEUED

In another client set memory limit `config set maxmemory 1`, then exec the transaction

    redis 127.0.0.1:6379> exec
    1) OK

This feels like a bug to me.

Thanks very much for all your work on Redis, guys :)

Martyn

Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/15/12 10:00 AM
Thanks for reporting! Working on it right now, update ASAP.
> --
> You received this message because you are subscribed to the Google Groups
> "Redis DB" group.
> To post to this group, send email to redi...@googlegroups.com.
> To unsubscribe from this group, send email to
> redis-db+u...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/redis-db?hl=en.



--
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org

Beauty is more important in computing than anywhere else in technology
because software is so complicated. Beauty is the ultimate defence
against complexity.
       — David Gelernter
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/15/12 10:27 AM
First observation: the reality is that while 2.6 changed the behavior and
made it worse, this was broken in Redis 2.4 as well.

1) If you set maxmemory to 1, then to 0 again during a transaction,
only a random subset of the commands will be committed.
2) If any event during a transaction frees some memory, only some of
the commands can be committed.

Now "broken" is probably not the right term, as Redis actually issues
an error when you try to queue the command, but it is still not a
desirable behavior. However, skipping the check client-side remains
a risk because of syntax errors.

Now, how do we fix this to get safety and simple behavior from the
point of view of client library implementations?

IMHO the best thing to do is:

1) If we detect an OOM error, syntax error, or any other error while
queueing a command (another such error is that the RDB dump is failing,
so we no longer accept writes), we flag the client internally and EXEC
will always fail when called.
2) In such a case we return a generic error from EXEC, like: "Error
while queueing one or more commands. EXEC aborted."
3) I believe it is way better if, when this happens, we automatically
DISCARD the transaction, for the sake of client simplicity.
4) A client should always read, once EXEC was called, the reply from
all the commands, and where QUEUED was expected but was not returned,
an error should be raised together with the EXEC error itself.

Does this make sense?

Note that the alternative design where we simply always reply QUEUED
to commands and fail on EXEC is not possible, as we don't want to queue
a huge transaction while there is an OOM condition.

Cheers,
Salvatore

On Thu, Nov 15, 2012 at 6:56 PM, Martyn Loughran <mar...@pusher.com> wrote:
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/15/12 10:33 AM
Oh, well, related: does the auto DISCARD break some clients? If so my
plan needs to be changed I guess, but not automatically DISCARDing is
simply broken IMHO.
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/15/12 10:40 AM
On Thu, Nov 15, 2012 at 7:26 PM, Salvatore Sanfilippo <ant...@gmail.com> wrote:
> Note that the alternative design where we simply always reply QUEUED
> to commands and fail on EXEC is not possible as we dont' want to queue
> an huge transaction while there is an OOM condition.

Actually, I'm not 100% sure this solution should be discarded so
easily. It's the best from the point of view of the API...

That is, we always return QUEUED but then EXEC fails. We never fail in
a different way...

But, look, since we flag the transaction to abort it, we don't need to
actually queue the command at all, thus, no memory will be used. This
looks like a possible alternative.

But otherwise I think that other clients may want this condition
signaled earlier... so if we also return errors for every queued
command, when there are errors we allow the client to fail ASAP,
at least when it is not using pipelining.

It's not a trivial topic apparently.

Salvatore
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Josiah Carlson 11/15/12 11:07 AM
On Thu, Nov 15, 2012 at 10:40 AM, Salvatore Sanfilippo
<ant...@gmail.com> wrote:
> On Thu, Nov 15, 2012 at 7:26 PM, Salvatore Sanfilippo <ant...@gmail.com> wrote:
>> Note that the alternative design where we simply always reply QUEUED
>> to commands and fail on EXEC is not possible as we dont' want to queue
>> an huge transaction while there is an OOM condition.
>
> Actually, I'm not 100% sure this solution is to discard easily. It's
> the best from the point of view fo API...
>
> That is, we always return QUEUED but then EXEC fails. We never fail in
> a different way...
>
> But, look, since we flag the transaction to abort it, we don't need to
> actually queue the command at all, thus, no memory will be used. This
> looks like a possible alternative.

This is the right answer.

> But otherwise I think that other clients may want this condition
> signaled earlier... so if we return errors for every queued command
> also, if there are errors, we allow the client to fail ASAP, if it is
> not using pipelining at least.

As a client user, I'd much prefer for the error to come up during EXEC
so I only have to trap the error in one spot.

> It's not a trivial topic apparently.

Maybe, but it's easy to ask what people's expectations are, and ask
whether that behavior is sane.

 - Josiah
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/15/12 11:21 AM
OK, I have an implementation to share with you:

https://github.com/antirez/redis/compare/safer-exec

This is the behavior in short:

1) EXEC fails if at least one command was not queued because of an error.
2) A generic error is always reported on EXEC.

Example:

redis 127.0.0.1:6379> multi
OK
redis 127.0.0.1:6379> set key1 value1
QUEUED

(config set maxmemory 1)

redis 127.0.0.1:6379> set key2 value2
(error) OOM command not allowed when used memory > 'maxmemory'.

(config set maxmemory 0)

redis 127.0.0.1:6379> set key3 value3
QUEUED
redis 127.0.0.1:6379> EXEC
(error) EXECABORT Transaction aborted, errors detected while queueing
commands.

I think that's the best bet for us IMHO, because centralizing the
error reporting only on EXEC would require accumulating the errors;
instead, this way we get a generic error from EXEC, and when clients
detect this error (-EXECABORT) they should also report the other
errors accordingly.

Other clients not using pipelining may or may not report errors ASAP
as commands are queued.

Basically it's up to the client designer, but Redis tries to play
nicely at this point.

The only potential problem: the new behavior is to DISCARD on EXEC
error, which makes a lot of sense IMHO.

Client developers, please raise your hand if this breaks your client! Thanks.
This fix should be backported into 2.6, so it is important that it
works as expected with existing clients.

Cheers,
Salvatore

On Thu, Nov 15, 2012 at 6:56 PM, Martyn Loughran <mar...@pusher.com> wrote:



Re: Regressions in 2.6 when out of memory & writing during multi-exec? Andy McCurdy 11/15/12 11:47 AM
This probably breaks redis-py, but it's not difficult to fix. I agree with Josiah however -- when I'm wearing my app developer hat (not as a client developer), I only want to see an error at EXEC time, not on all the individual commands that I'm queuing up. This already happens in redis-py as commands are buffered in the client until EXEC is called, but other clients may not be doing this.

I would like to raise one other issue related to this. From what I have seen, most users equate MULTI+EXEC to START TRANSACTION+COMMIT in a relational database. Specifically, they expect that if any command fails when EXEC is called that NONE of the commands in the multi+exec will affect the data inside Redis.

I receive bug reports and emails about this issue regularly, to which I respond with "this is how Redis works.." and point them to the "Errors inside a transaction" section of http://redis.io/topics/transactions. However, as a user, this is the behavior I expect and want when I think about transactions. And I believe it's what most other people expect and want.

Apologies for the tangent.

-andy
Re: Regressions in 2.6 when out of memory & writing during multi-exec? M. Edward (Ed) Borasky 11/15/12 12:00 PM
On Thu, Nov 15, 2012 at 11:47 AM, Andy McCurdy <sed...@gmail.com> wrote:
> This probably breaks redis-py, but it's not difficult to fix. I agree with
> Josiah however -- when I'm wearing my app developer hat (not as a client
> developer), I only want to see an error at EXEC time, not on all the
> individual commands that I'm queuing up. This already happens in redis-py as
> commands are buffered in the client until EXEC is called, but other clients
> may not be doing this.
>
> I would like to raise one other issue related to this. From what I have
> seen, most users equate MULTI+EXEC to START TRANSACTION+COMMIT in a
> relational database. Specifically, they expect that if any command fails
> when EXEC is called that NONE of the commands in the multi+exec will affect
> the data inside Redis.
>
> I receive bug reports and emails about this issue regularly, to which I
> respond with "this is how Redis works.." and point them to the section
> "Errors inside a transaction" section of
> (http://redis.io/topics/transactions). However, as a user, this is the
> behavior I expect and want when I think about transactions. And I believe
> it's what most other people expect and want.
>
> Apologies for the tangent.
>
> -andy

As a newcomer to Redis, I didn't expect "typical RDBMS transaction"
behavior from Redis. Given that Redis is only a few years old, I think
we're all "newcomers" and take the tool at face value.

--
Twitter: http://twitter.com/znmeb; Computational Journalism Publishers
Workbench: http://znmeb.github.com/Computational-Journalism-Publishers-Workbench/

How the Hell can the lion sleep with all those people singing "A weem
oh way!" at the top of their lungs?
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Martyn Loughran 11/16/12 4:54 AM
Thanks for dealing with this so quickly Salvatore!

I completely agree with your monologue last night, seems very sensible :) Specifically having the exec return an error seems correct, and makes clients easy to write. I don't see any problems with discarding the transaction; presumably clients which abstract transactions currently have no option but to discard after a failed exec.

I've tried the branch and it works perfectly.

Thanks again,
Martyn
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 6:49 AM
On Thu, Nov 15, 2012 at 8:47 PM, Andy McCurdy <sed...@gmail.com> wrote:
> This probably breaks redis-py, but it's not difficult to fix. I agree with
> Josiah however -- when I'm wearing my app developer hat (not as a client
> developer), I only want to see an error at EXEC time, not on all the
> individual commands that I'm queuing up. This already happens in redis-py as
> commands are buffered in the client until EXEC is called, but other clients
> may not be doing this.

I don't want to break redis-py at all... I think this will force me to
make just a minor design change. Basically, when EXEC fails because
it is dirty due to previous errors, it will reply with an error like:

-EXECABORT Transaction aborted due to previous errors, please use DISCARD.

About the EXEC-time error: yes, it is a good idea and it is what my
branch is doing. What is a bad idea IMHO is to reply QUEUED to every
command even in the presence of errors.
Why is it a bad idea? Because it leaves us with two alternatives that
are both bad:

1) Just reply with a generic error at EXEC time.
2) Accumulate errors and reply with all of them at EXEC time, but
Redis errors are single lines.

There is nothing wrong with providing the detailed error ASAP as it
happens, while still replying with a final generic error at EXEC time.

A client should act like this:

1) Send the commands.
2) Get the N+1 replies (N queued + 1 for EXEC).
3) Check only the last reply; if it's OK we don't care about the rest
and return a success code to the caller.
4) If the last reply is -EXECABORT or any other error, parse the rest
of the replies and also provide detailed information about all the
errors encountered.
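A sketch of those four steps in Python, operating on the raw replies a client has read back (error replies modelled here as strings starting with "-"; all names are illustrative, not from any real client):

```python
# Steps 1-4 above, applied to the N+1 replies read back after sending
# MULTI, N commands, and EXEC. This is a sketch, not a real client.

class ExecAborted(Exception):
    """EXEC failed; carries the detailed per-command queueing errors."""
    def __init__(self, exec_error, command_errors):
        super().__init__(exec_error)
        self.command_errors = command_errors

def parse_transaction_replies(replies):
    """replies: N queue-time replies followed by the single EXEC reply."""
    *queue_replies, exec_reply = replies
    # Step 3: if EXEC succeeded, return its result without further checks.
    if not (isinstance(exec_reply, str) and exec_reply.startswith("-")):
        return exec_reply  # list of per-command results
    # Step 4: EXEC failed (-EXECABORT or other error); collect the
    # detailed errors from the replies where +QUEUED was expected.
    details = [r for r in queue_replies if r != "+QUEUED"]
    raise ExecAborted(exec_reply, details)
```

A caller would pipeline the whole transaction, read all replies, and hand them to this parser, surfacing both the generic -EXECABORT and the specific queueing errors in one place.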

This seems really sane to me, and offers other clients that want to
behave differently the ability to also terminate the transaction ASAP
if needed.

> I would like to raise one other issue related to this. From what I have
> seen, most users equate MULTI+EXEC to START TRANSACTION+COMMIT in a
> relational database. Specifically, they expect that if any command fails
> when EXEC is called that NONE of the commands in the multi+exec will affect
> the data inside Redis.
>
> I receive bug reports and emails about this issue regularly, to which I
> respond with "this is how Redis works.." and point them to the section
> "Errors inside a transaction" section of
> (http://redis.io/topics/transactions). However, as a user, this is the
> behavior I expect and want when I think about transactions. And I believe
> it's what most other people expect and want.

It is inevitable that when people hear "transaction" they will think
SQL-wise, but this is not something that should force us to add
rollbacks into Redis; they are complex and useless in the case of
Redis, as commands can fail only for type mismatches and other errors
like this.

If you have a type mismatch, there is an error in the *logic* of your
app. It's like calling SADD instead of SREM. Nobody can save those
users from application mistakes, only debugging. To add a complex
mechanism just to save them during development, and just for a subset
of cases, is not the way to go IMHO.

> Apologies for the tangent

Off-topic is my life style :-) And btw your comment seems very on-topic to me.
Thanks for discussing another point, we are here to chat about Redis!

Salvatore
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 8:42 AM
Thank you,

I updated the branch now:

https://github.com/antirez/redis/compare/safer-exec

Basically the only changes are:

1) A different error message is returned, providing a hint about the
need to call DISCARD.
2) The MULTI state is not cleared automatically.

This should be enough to retain full backward compatibility.

Andy, could you please ACK that redis-py is now OK with the change,
without modifications? Thank you.

Salvatore
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Andy McCurdy 11/16/12 10:08 AM

On Nov 16, 2012, at 8:42 AM, Salvatore Sanfilippo <ant...@gmail.com> wrote:

> Thank you,
>
> I updated the branch now:
>
> https://github.com/antirez/redis/compare/safer-exec
>
> Basically the only change is that.
>
> 1) A different error message is returned, providing an hint about the
> need to call DISCARD.
> 2) The MULTI state is not cleared automatically.
>
> This should be enough to retain full backward compatibility.
>
> Andy, please could you ACK that Redis-py is now ok with the change
> without modifications? Thank you.
>

Two of my tests are failing with the safer-exec branch, both when testing against errors occurring when queuing a command. Trying to come up with a way to maintain backwards compatibility, though I'm not confident I'll be able to. Previously, users had the option of raising on the first error seen or returning the response from EXEC, which is a list that includes exception instances for the commands that failed.

-andy

Re: Regressions in 2.6 when out of memory & writing during multi-exec? Andy McCurdy 11/16/12 11:40 AM
redis-py 2.7.2 released, compatible with the safer-exec branch. Slightly backwards incompatible, but I'm of the opinion that it's more correct.

-andy
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 1:22 PM
Hi Andy, thanks for your work!

I wonder one thing about the two tests that were failing.
Basically, I think the only thing that could cause a test to fail is
that you were testing something like:

MULTI
command with syntax error
some good command
EXEC

That previously used to work, and will now fail instead.

I'm asking just to be sure, because if there are other semantic
differences that could cause the two tests to fail, it means I'm
missing something.

Btw at the same time I'm investigating your new commits to see if I
can find an answer myself but I'm not a Python wizard! :-)

Salvatore
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 1:25 PM
Looking at the code, it looks like my assumption is correct; also, the
commit mostly *removes* code, which is always a good sign! :-)
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Marc Gravell 11/16/12 1:44 PM
Can I clarify: what is the reason the "multi" isn't cleared on exec? In what scenario is it desirable to call "exec" and have it neither actually execute nor discard? To me the logical thing is for an exec (when there were errors) to discard automatically.

Or have I misunderstood?

Marc
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 2:38 PM
Marc, you are 100% right; the only concern is backward compatibility.

Before, when EXEC failed because of maxmemory (the only reason it could
fail AFAIK), the transaction was not cleared.

Cheers,
Salvatore
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 2:40 PM
p.s. However it is not *so* bad now that I changed the error message ;)

Yep, that looks silly, but it actually documents the behavior every
time it happens, which is a good thing:

redis 127.0.0.1:6379> multi
OK
redis 127.0.0.1:6379> blabla
(error) ERR unknown command 'blabla'
redis 127.0.0.1:6379> exec
(error) EXECABORT Can't EXEC because of previous errors. Use DISCARD.
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Marc Gravell 11/16/12 3:52 PM
It does, however, mean we can't pipeline anything past an EXEC, because the next command can unexpectedly become part of the prior transaction, which can **only** now be aborted via DISCARD. I appreciate it is more a "realisation of old behaviour" than intentional, but it... is very vexing. Adding code to cope with this scenario necessitates a block in the pipe for every EXEC.
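The hazard can be seen with a toy model of the server-side MULTI state (purely illustrative Python, not Redis internals): if a failed EXEC leaves the transaction open, the next pipelined command is silently queued instead of executed.

```python
# Toy model of the server-side MULTI state. With no auto-discard on a
# failed EXEC, a command pipelined after EXEC is swallowed into the
# stale transaction. Everything here is a sketch, not real Redis code.

class ToyServer:
    def __init__(self, discard_on_failed_exec):
        self.discard_on_failed_exec = discard_on_failed_exec
        self.in_multi = False   # inside a MULTI block?
        self.dirty = False      # an error happened while queueing?
        self.queue = []

    def send(self, cmd):
        if cmd == "MULTI":
            self.in_multi, self.dirty, self.queue = True, False, []
            return "+OK"
        if cmd == "EXEC":
            if self.dirty:
                if self.discard_on_failed_exec:
                    self.in_multi = False
                return "-EXECABORT"
            self.in_multi = False
            return ["OK"] * len(self.queue)
        if self.in_multi:
            if cmd == "BADCMD":
                self.dirty = True
                return "-ERR unknown command"
            self.queue.append(cmd)
            return "+QUEUED"
        return "+OK"  # executed immediately outside a transaction

# Without auto-discard, the GET pipelined after the failed EXEC is
# queued into the still-open transaction instead of executing:
s = ToyServer(discard_on_failed_exec=False)
for c in ("MULTI", "BADCMD", "EXEC"):
    s.send(c)
print(s.send("GET foo"))  # +QUEUED -- still inside the transaction!

# With auto-discard, the same GET executes normally:
s = ToyServer(discard_on_failed_exec=True)
for c in ("MULTI", "BADCMD", "EXEC"):
    s.send(c)
print(s.send("GET foo"))  # +OK
```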

Marc
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Marc Gravell 11/16/12 3:53 PM
Or have I misunderstood? (Sorry if I have)

Marc
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/16/12 4:04 PM
Ooops... that's right, and unacceptable.

It must be possible to pipeline multiple MULTI/EXEC blocks without issues.

OK... reverting to the previous behavior tomorrow morning. Thanks for spotting this!
Btw the branch was fortunately still not merged.

This also means we'll be a bit less backward compatible, but basically
there is nothing we can do about this, I guess.
(Well, only code encountering an error during the pipeline breaks,
fortunately, and btw EXEC could already return an error in the past.)

Salvatore
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/17/12 4:06 AM
Branch updated again with the new behavior (discard on failed exec)
and tests. Thanks!
Re: Regressions in 2.6 when out of memory & writing during multi-exec? Salvatore Sanfilippo 11/22/12 1:52 AM
Hi,

Just to let you know that the multi-exec branch is now merged into
2.6, 2.8 and unstable branches.

I'll release 2.6.5 with the fix today.

Thanks for the help,
Salvatore