Redis under virtualization


Adrián Navarro

unread,
Apr 14, 2011, 6:12:41 AM4/14/11
to Redis DB
Hello,

Until now I've been running Redis under a VMware ESX system. ESX's
memory management swaps some guest memory out to SSD, and since Redis
allocates a lot of memory this creates a bottleneck that makes Redis
slow and CPU-hungry.

Now I'm moving to a dedicated server, but I'd like to keep using
virtualization (it makes backups easier, and I can run different
environments on the same machine). I have to choose between Xen
(Citrix) and OpenVZ (Proxmox).

Does anyone have any experiences in this topic?

Thank you,
A.

Javier Guerra Giraldez

unread,
Apr 14, 2011, 10:41:17 AM4/14/11
to redi...@googlegroups.com
On Thu, Apr 14, 2011 at 5:12 AM, Adrián Navarro <adr...@navarro.at> wrote:
> I have to choose between Xen
> (Citrix) or OpenVZ (Proxmox).

While I have only used KVM and LXC in production so far, I've tested
and analyzed Xen (back in the 3.1 days) and OpenVZ, so I think I can
clear a few things up:

About performance:

- Xen is the most 'hard' hypervisor of the bunch, meaning that what
you allocate for a VM is what you get. The performance is very solid
and consistent once you get it right. Apart from disk and network
bottlenecks in extreme cases, load on one VM won't affect another.

- OpenVZ (and LXC) have the lowest overhead and the most flexibility
in terms of resource allocation. It's usual to give lower and upper
bounds for CPU, RAM, I/O, etc. That means you can achieve higher
utilization ratios, but also that at peak times you'll get less than
you'd like.

About swapping:

- Xen doesn't swap; the hosted OS does. It's common to provide two
virtual block devices for each VM, one for the filesystem and one for
swap. If I/O bandwidth is well controlled, you won't get surprises.
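A minimal sketch of what that two-device layout might look like in a Xen guest config (the LVM volume names, sizes, and device names here are illustrative assumptions, not something from this thread):

```
# /etc/xen/redis-vm.cfg — illustrative domU config
memory = 1024
vcpus  = 2
disk   = [
    'phy:/dev/vg0/redisvm-root,xvda1,w',  # root filesystem
    'phy:/dev/vg0/redisvm-swap,xvda2,w',  # dedicated swap device
]
```

Inside the guest, xvda2 would then be initialized with mkswap and listed as swap in /etc/fstab.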

- OpenVZ runs all VMs on a single kernel, and swapping is done at that
level. You lose fine-grained control from within a single VM over
which processes get swapped, but since there's no other 'lower' level
of swapping, you won't find crucial kernel structures suddenly paged
out (as is common in VMware setups where you allow it to swap a VM's
memory). Also, since the single kernel has a total picture of memory
usage, it can better optimize the whole set of resources.
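One Redis-specific footnote on memory management, taken from the Redis documentation rather than this thread: on whichever kernel actually hosts Redis, the usual advice is to allow memory overcommit so that the fork() done for background saves doesn't fail when Redis already occupies most of the RAM. A sketch, to be run on the host kernel (under OpenVZ this is a host-level sysctl; containers generally can't change it):

```
# Recommended in the Redis docs so that fork() for BGSAVE succeeds
# even when Redis uses most of the available memory.
sysctl -w vm.overcommit_memory=1
# Persist across reboots:
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
```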


In the end, I think you'll be far better served by either of them than
by VMware. The choice between the two is mostly a matter of personal
taste: if you want to set it up once and be sure it will run without
hiccups, go with Xen and sleep peacefully. If you want to
intelligently adapt resource allocation to a wide range of changing
scenarios, try OpenVZ and keep a close eye on how it reacts.

--
Javier

Adrián Navarro

unread,
Apr 15, 2011, 9:02:55 PM4/15/11
to redi...@googlegroups.com, Javier Guerra Giraldez
Thank you!

I've chosen Xen, mostly because I've already used it before…

Right now Redis is taking 25-30% of one CPU (a lot less than the 100%+
under ESX):
552 service 20 0 483M 374M 812 R 25.0 36.7 4h13:34
/usr/local/bin/redis-server /etc/redis.conf

The VM has 2 vCPUs and the load average inside the VM ranges between
0.30 and 0.50. The Redis server is very responsive (and that's
awesome!), but I'm still wondering whether such CPU load is normal.

Other (almost idle) VMs are okay (0.00) and, *surprisingly*, the dom0
(host) is really, really calm:
03:02:01 up 13:11, 1 user, load average: 0.01, 0.00, 0.00


--
Adrián Navarro ~ +34 608-83-10-94 ~ http://adrian.navarro.at/

Adrián Navarro

unread,
Apr 16, 2011, 6:48:44 PM4/16/11
to redi...@googlegroups.com, Javier Guerra Giraldez
Never mind. I've disabled Redis virtual memory and RDB compression,
and now redis-server stays out of the top of the htop view… nice!

I've also noticed that memory usage has jumped from 200 MB to 470 MB,
but I think I can still deal with that.
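For reference, the two settings in question would look something like this in a Redis 2.x-era redis.conf (only the relevant lines are shown, not a full config):

```
# redis.conf (Redis 2.x era)
vm-enabled no       # keep the whole dataset in RAM; no Redis-level swapping
rdbcompression no   # don't LZF-compress string values in dump.rdb
                    # (less CPU during saves, at the cost of a larger dump file)
```

Disabling virtual memory keeps every value resident in RAM, which is consistent with the jump in memory usage noted above.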
