BGSAVE causing CPU 100% spike


Matt Todd

Dec 1, 2009, 1:01:01 AM
to Redis DB
Hey, is it expected to see a BGSAVE taking 100% of the CPU to persist to disk?

I'm running on an EC2 instance and using the default persistence settings, and it usually happens every 5 minutes (since it's just under 10,000 changes per minute).
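
In case it matters, these are the snapshotting thresholds I believe I'm running with (the stock `save <seconds> <changes>` rules from redis.conf; quoting from memory, so they may not match this Redis version exactly):

```
# redis.conf: "save <seconds> <changes>" -> trigger a BGSAVE when at
# least <changes> keys changed within <seconds> seconds.
save 900 1
save 300 10
save 60 10000
```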

Once it forks, the second redis-server shoots up to 100% CPU usage and the server load (on a medium CPU instance) starts at 0.3 and gets as high as 0.76 over the span it takes to write. Obviously, I want to keep this number down as low as possible, but I need the data persisted.

The redis data is about 250mb to 300mb, and rotates completely every hour +1 (as in, the data from hour 1 is completely gone by the end of the 2nd hour, due to expiration).

Is there any way to control how much CPU is being taken by the Redis server?

Obviously, as scaling becomes a problem, we'll want to move the Redis server onto its own dedicated machine (and slave), but for now we're having it share resources with some background jobs and a message queue broker and we'd like to prevent load from getting too high.

Thanks!

Matt



--
Matt Todd
Highgroove Studios
www.highgroove.com
cell: 404-314-2612
blog: maraby.org

Scout - Web Monitoring and Reporting Software
www.scoutapp.com

Salvatore Sanfilippo

Dec 1, 2009, 5:29:27 AM
to redi...@googlegroups.com
On Tue, Dec 1, 2009 at 7:01 AM, Matt Todd <mt...@highgroove.com> wrote:
> Hey, is it expected to see a BGSAVE taking 100% of the CPU to persist to
> disk?

Hello, this is actually the intended behavior, but it's 100% of a single process, so only one core.

> I'm running on an EC2 instance and using the default persistence settings,
> and it usually happens every 5 minutes (since it's just under 10,000 changes
> per minute).
> Once it forks, the second redis-server shoots up to 100% CPU usage and the
> server load (on a medium CPU instance) starts at 0.3 and gets as high as
> 0.76 over the span it takes to write. Obviously, I want to keep this number
> down as low as possible, but I need the data persisted.

Because this happens on a single core, even if the load goes high it is
actually not a big problem in practice. But since this is not desirable
for you, I think you should switch to the Redis 1.1 append-only file for
persistence.
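
If you try it, turning it on should be just a matter of something like this in redis.conf (directive names from memory for 1.1, so double-check them against your version):

```
# Enable the append-only file instead of relying on BGSAVE snapshots.
appendonly yes
# How often to fsync the AOF: always, everysec, or no.
appendfsync everysec
```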

> The redis data is about 250mb to 300mb, and rotates completely every hour +1
> (as in, the data from hour 1 is completely gone by the end of the 2nd hour,
> due to expiration).
> Is there any way to control how much CPU is being taken by the Redis server?

Currently not with snapshotting: it will just try to serialize the data
to disk as fast as possible, because the longer it takes, the more the
memory pages of the two processes will diverge (the fork shares pages
copy-on-write), and the more physical memory will be used.
I think you should check whether, apart from the load figure, this is
having a bad effect on real-world system performance. I don't know
about EC2, but on a normal box with 2 or 4 cores this is usually not a
problem, as many users have reported.

> Obviously, as scaling becomes a problem, we'll want to move the Redis server
> onto it's own dedicated machine (and slave), but for now we're having it
> share resources with some background jobs and a message queue broker and
> we'd like to prevent load from getting too high.

You'll find that Redis performance outside EC2 is impressive
compared to EC2 :) Even on a *very* entry-level box.
Btw the load will get an increment of 1 while the save runs, as there
is always a process willing to run (the saving child), but it's not a
concern, especially if this EC2 instance is running *only* Redis. A
much bigger problem, I guess, is insufficient disk bandwidth if there
are other disk-intensive processes running on the system.

Btw at least in theory it's trivial to add some kind of config
parameter to make the saving child less aggressive CPU-wise,
but I think after all it's not a good idea most of the time.
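
Just to sketch what such a parameter could look like (a hypothetical illustration in Python, not actual Redis code; the function and parameter names are mine): the forked child could simply renice itself before serializing, trading a longer save for less CPU pressure on the parent:

```python
import os

def bgsave_forked(serialize, niceness=10):
    """Fork a child that serializes the dataset at a lower CPU priority.

    Hypothetical sketch: `serialize` and `niceness` are illustrative
    names, not part of Redis.
    """
    pid = os.fork()
    if pid == 0:
        # Child: raise our nice value so the parent process (the server
        # still answering clients) wins CPU contention.
        os.nice(niceness)
        serialize()
        os._exit(0)
    # Parent: return at once; the caller can waitpid() for the child later.
    return pid
```

The total CPU burned is the same; the child just yields to the serving process, which is roughly why it wouldn't help the load figure much.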

Cheers,
Salvatore

--
Salvatore 'antirez' Sanfilippo
http://invece.org

"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay

Matt Todd

Dec 1, 2009, 1:20:18 PM
to redi...@googlegroups.com
Thanks for the response. Not surprised at all, in fact it's more or less what I expected.

I'm keeping my eye on it, obviously, but I certainly plan to separate this onto a separate instance once the need is big enough.

Thanks,

Matt






Salvatore Sanfilippo

Dec 1, 2009, 1:41:34 PM
to redi...@googlegroups.com
On Tue, Dec 1, 2009 at 7:20 PM, Matt Todd <chio...@gmail.com> wrote:
> Thanks for the response. Not surprised at all, in fact it's more or less
> what I expected.
> I'm keeping my eye on it, obviously, but I certainly plan to separate this
> onto a separate instance once the need is big enough.

You are welcome, Matt. Btw, if your app is not very write-intensive,
the append-only file of 1.1 can be a nice solution to save CPU cycles.
I think I'll switch most of my Redis instances to append-only once 1.1
is released as stable.

Matt Todd

Dec 1, 2009, 2:08:08 PM
to redi...@googlegroups.com
> You are welcome Matt, btw if your app is not very write-intensive
> append-only files of 1.1 can be a nice solution to save CPU cycles. I
> think I'll switch most of my Redis instances to append only once 1.1
> is released as stable.

Actually, it's about 95% write and 5% read, haha.

I looked at AOF but I don't know if it'll actually help me out. I specifically have to reread about AOF and expiration.

Matt

Salvatore Sanfilippo

Dec 1, 2009, 5:35:06 PM
to redi...@googlegroups.com
On Tue, Dec 1, 2009 at 8:08 PM, Matt Todd <chio...@gmail.com> wrote:
>> You are welcome Matt, btw if your app is not very write-intensive
>> append-only files of 1.1 can be a nice solution to save CPU cycles. I
>> think I'll switch most of my Redis instances to append only once 1.1
>> is released as stable.
>
>
> Actually, it's about 95% write and 5% read, haha.
> I looked at AOF but I don't know if it'll actually help me out. I
> specifically have to reread about AOF and expiration.
> Matt

Hello Matt, if you have a lot of writes and very few reads, AOF is not
a good idea, but it handles expires without problems (there is a new
command, EXPIREAT, created specifically to handle AOF + expires better;
EXPIREs are automatically translated into EXPIREAT in the AOF
output).
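
Conceptually the translation is just relative-to-absolute time; a hypothetical sketch (the function name and shape are mine, not the Redis source):

```python
import time

def expire_to_expireat(key, ttl_seconds, now=None):
    """Rewrite a relative EXPIRE as an absolute EXPIREAT command.

    Sketch of the idea only: a relative TTL would be wrong if the AOF
    is replayed later, while an absolute Unix timestamp stays correct.
    """
    if now is None:
        now = int(time.time())
    return f"EXPIREAT {key} {now + ttl_seconds}"
```

So an EXPIRE with a 60-second TTL written now still replays correctly even hours later.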

Please, since the documentation is not so strong yet, feel free to ask
me about any kind of doubt!