Hi All,
I've been measuring the performance of memcached running on OSv compared to Linux. I built and ran OSv as follows:
osv git tree @0df9862
apps tree @5e6b9ae
$ make image=memcached mode=release
$ ./scripts/run.py --novnc -m2G -c1 -nv -e "memcached -u root -t1 -m1024"
$ qemu-system-x86_64 -m 2G -cpu host -enable-kvm --nographic -smp cpus=1 \
-netdev tap,ifname=tap0,id=vlan1,script=no,downscript=no,vhost=on \
-device virtio-net-pci,netdev=vlan1,mac=00:11:22:33:44:55 ...(irrelevant devices)...
(I run the qemu command line by hand rather than through the run script.)
I run a Linux guest similarly. For load generation I use the mutilate benchmark, running on a separate machine connected to the host by a direct 10GbE cable. The benchmark measures latency as a function of throughput and is configured to use the Facebook ETC workload. Here are my results:
http://i.imgur.com/eQXZbRa.png
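For reference, the ETC workload here is mutilate's built-in approximation of the Facebook distributions; the invocation looks roughly like the following (the server address, thread/connection counts, and QPS sweep below are illustrative, not my exact values):

$ ./mutilate -s 192.168.0.2:11211 \
    -K fb_key -V fb_value -i fb_ia -u 0.033 \
    -T 8 -c 8 -t 30 --scan 10000:200000:10000
# fb_key/fb_value/fb_ia are mutilate's Facebook ETC key-size, value-size and
# inter-arrival distributions; -u 0.033 gives the ~3% SET ratio from the paper;
# --scan sweeps the offered QPS to produce the latency-vs-throughput curve.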
I wonder why performance is worse on OSv than on Linux; your USENIX paper shows better memcached throughput (albeit with UDP). Did I misconfigure OSv, or is this a known phenomenon? I also ran the same test with multiple cores and found that OSv's performance degrades whereas Linux's scales. I suspect this is due to the lack of multiqueue support in your virtio-net driver (which Linux supports).
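(In case it's relevant: by Linux multiqueue support I mean something along these lines; the queue count below is just an example, not necessarily what I ran with.)

$ qemu-system-x86_64 ... \
    -netdev tap,ifname=tap0,id=vlan1,script=no,downscript=no,vhost=on,queues=4 \
    -device virtio-net-pci,netdev=vlan1,mq=on,vectors=10,mac=00:11:22:33:44:55 ...
# inside the Linux guest, spread the virtqueues across the vCPUs:
$ ethtool -L eth0 combined 4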