What does collapsing mean exactly? As in crashing, not replying to
queries for N seconds, and so forth?
What is the Redis process RSS?
How many queries per second does the server receive, and how long does
a BGSAVE take to complete?
It seems like you are running out of memory during a BGSAVE.
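For the record, one way to sample a process RSS on Linux is to read it from /proc (a sketch; normally you would use `pidof redis-server` to get the pid — here the current shell's pid is used as a stand-in so the snippet runs anywhere):

```shell
# Resident set size of a process, read from /proc.
# Normally: pid=$(pidof redis-server)
pid=$$                      # stand-in pid so the example is self-contained
rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
echo "RSS: $(( rss_kb / 1024 )) MB"
```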
Cheers,
Salvatore
> --
> You received this message because you are subscribed to the Google Groups "Redis DB" group.
> To post to this group, send email to redi...@googlegroups.com.
> To unsubscribe from this group, send email to redis-db+u...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/redis-db?hl=en.
>
>
--
Salvatore 'antirez' Sanfilippo
open source developer - VMware
http://invece.org
"We are what we repeatedly do. Excellence, therefore, is not an act,
but a habit." -- Aristotle
Also, in general, 2.2 will use less memory. Switching is highly recommended.
Cheers,
Salvatore
--
Ok, that is known as the OOM killer, I think.
First obvious question is, what is the overcommit memory policy
currently set in the system?
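For reference, the policy lives in `vm.overcommit_memory` (0 = heuristic, 1 = always overcommit, 2 = strict accounting); Redis recommends 1 so that the fork() done for BGSAVE is not refused. A sketch of checking and persisting it, run as root:

```shell
# current policy
cat /proc/sys/vm/overcommit_memory

# set to 1 for the running kernel
sysctl vm.overcommit_memory=1

# persist across reboots
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
```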
> Currently it receives anywhere around 2000 queries per second. The graph
> attached show the peaks for bgsaves and the time it takes. I have seen an
> average of 30 seconds per bgsave.
Mostly reads or mostly writes?
> Yes, we do run out of memory. And we see this in the system log messages
> after redis crashes.
I think your problem may be the OOM killer, triggered because the
overcommit setting is wrong; otherwise you would experience other
problems first, like the server becoming slower and slower.
Setting overcommit to the value Redis suggests in the first lines of
the log once it is restarted will probably fix the problem, but
upgrading to 2.2 will help in general, especially if your data set
contains small lists, or small sets of integers, or the like.
The more info you can provide on your dataset and queries, the more we can help.
Cheers,
Salvatore
On Thu, Jan 27, 2011 at 6:44 PM, Ity <ity...@gmail.com> wrote:
> By collapsing I meant that Redis is crashing, the process dies.
> RSS for redis-server right now - 7.4g (it ranges anywhere between 7g and
> 12g).
> Currently it receives anywhere around 2000 queries per second. The graph
> attached show the peaks for bgsaves and the time it takes. I have seen an
> average of 30 seconds per bgsave.
> overcommit_memory is set to 1
So that is definitely not the problem.
It is strange that the OOM killer kills you before you see bad
performance due to swapping.
> In the last hour, we have had 250,000 reads, 100,000 writes each from 8
> different clients which works out to be about
> 2,000,000 reads and 800,000 writes in total, approximately. This works out
> to be about 800 qps
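As a sanity check on those numbers (plain shell arithmetic, nothing Redis-specific):

```shell
# 2,000,000 reads + 800,000 writes over one hour
reads=2000000
writes=800000
echo $(( (reads + writes) / 3600 ))   # 777, i.e. roughly the 800 qps cited
```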
Do you have small lists, or small sets composed of integers?
If either of these is true, you can save tons of memory with 2.2,
bringing the memory limit again to a low value.
In any case, given your read load, this is going to work better with 2.2.
I strongly suggest upgrading: it is completely backward compatible
with 2.0 UNLESS you are using an old client that only supports the old
protocol.
Cheers,
Salvatore
>> Do you have small lists, or small sets composed of integers?
>> If either of this is true, you can save tons of memory with 2.2,
>> bringing the memory limit again to a low value.
>
> So our data is pretty much
> Url(Key) -> List of values (which might have strings, integers, url again
> etc)
Oh, very interesting. Are most of these lists shorter than a few hundred
elements, but longer than... 5?
If this is true Redis 2.2 will use something like just 1/5 of the memory.
But please tell me the average list size and I'll reply with the right
config option.
>> Anyway given your reads, with 2.2 this is going to work better anyway.
>> I strongly suggest upgrading, it is completely backward compatible
>> with 2.0 UNLESS you are using an old client that only supports the old
>> protocol.
>
> We are using the most recent version of Jedis (1.5.1) as the client. I
> wanted to also point out that we are not using the vm option. And as you
> said, we will try and move to 2.2 today and will keep you posted on how that
> goes.
I think Jedis is fine.
Cheers,
Salvatore
On Thu, Jan 27, 2011 at 7:16 PM, Ity <ity...@gmail.com> wrote:
>
> The list size at the moment is <=7 but it might increase in the future. On
> another note, is redis more efficient if we store the whole list serialized?
Oh, you don't need to do anything at all: just upgrade to 2.2 and
you'll see an impressive improvement in memory usage.
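For reference, the compact encodings in 2.2 are governed by redis.conf options along these lines (an illustrative fragment; the values shown are, to the best of my knowledge, the 2.2 defaults, so lists of up to 7 short elements already qualify without any tuning):

```
# lists shorter than this many elements, whose values are all shorter
# than list-max-ziplist-value bytes, use the compact ziplist encoding
list-max-ziplist-entries 128
list-max-ziplist-value 64
# sets containing only integers use the compact intset encoding
# up to this many elements
set-max-intset-entries 512
```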
> Also, I wanted to mention that we do not have swap on the EC2 machine that
> we are running redis on.
Yes, this is why the overcommit setting turned out to be irrelevant here.
With swap, performance will suffer when Redis runs out of memory during
a BGSAVE, but the process will not be killed by the OOM killer.
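For completeness, a swap file can be added to an EC2 instance along these lines (a sketch, run as root; the 4 GB size and /swapfile path are arbitrary choices, not from the thread):

```shell
dd if=/dev/zero of=/swapfile bs=1M count=4096   # 4 GB of zeroes
chmod 600 /swapfile
mkswap /swapfile      # format as swap
swapon /swapfile      # enable it
# persist across reboots
echo "/swapfile none swap sw 0 0" >> /etc/fstab
```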