Thanks for reporting these on the list. Very interesting results!
We have a lock contention issue in the TCP/IP stack around routing and
ARP entries, and an issue with TX throughput, both of which we're
currently working on. You might be affected by those.
BTW, if you are able to dig in a little bit deeper, you can use
"virt-stat" to produce an overall picture of the workload for both
Linux and OSv:
https://github.com/penberg/virt-stat
You can then also try out the built-in sampling profiler in OSv:
https://github.com/cloudius-systems/osv/wiki/Trace-analysis-using-trace.py#cpu-sampler
If it's a lock contention issue, you can use the lock tracing infrastructure:
https://github.com/cloudius-systems/osv/wiki/Debugging-Excessive-and-Contended-Mutex-Locks
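To make the symptom concrete, here's a generic userspace sketch (plain pthreads, nothing OSv-specific; the thread and iteration counts are arbitrary) of the kind of per-lock wait time that tracing infrastructure like this reports: several threads hammering one mutex, each timing how long it blocks acquiring it:

    /* build: cc -O2 -pthread contention.c */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define NTHREADS 4
    #define NITERS   100000

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t wait_ns[NTHREADS]; /* per-thread time blocked on the lock */

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (int i = 0; i < NITERS; i++) {
            uint64_t t0 = now_ns();
            pthread_mutex_lock(&lock);      /* contended acquisition */
            wait_ns[id] += now_ns() - t0;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        for (int i = 0; i < NTHREADS; i++)
            printf("thread %d waited %.1f ms acquiring the lock\n",
                   i, wait_ns[i] / 1e6);
        return 0;
    }

If most of the run time shows up as acquisition wait rather than useful work, the lock itself is the bottleneck, which is what the tracing output lets you confirm per call site.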
I think an apples-to-kiwis comparison would be to use PCI passthrough or a virtual function on an SR-IOV NIC. I may need to get some hardware, e.g. a couple of Intel X540-T2s; however, I don't think my BIOS supports SR-IOV, and I am on a budget, so let me know if you have spare NICs and I'll PM you my postal address ;-). My hardware supports VT-d. The CPU on the target system is Nehalem. It might be better to swap the client and server around and run the http server and OSv on the Westmere system. Is there a tangible difference in VT support between Nehalem and Westmere that would make any difference to my tests? I also believe I am bandwidth limited. The server can do 160,000 reqs/sec on loopback, similar to nginx...
What kind of error are you seeing? Reading the code, SOCK_DGRAM seems to be wired into parts of the TCP stack. Perhaps we missed something.
I believe I hit this assertion (however, I changed to SOCK_STREAM and am using send/recv instead of sendmsg/recvmsg):

osv/libc/af_local.cc:100

    int socketpair_af_local(int type, int proto, int sv[2])
    {
        assert(type == SOCK_STREAM);
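For what it's worth, here is a minimal sketch of that workaround (an AF_LOCAL/SOCK_STREAM socketpair driven with send/recv; the message contents and error handling are just illustrative):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];

        /* SOCK_DGRAM trips assert(type == SOCK_STREAM) in OSv's
         * libc/af_local.cc, so stick to SOCK_STREAM. */
        if (socketpair(AF_LOCAL, SOCK_STREAM, 0, sv) < 0) {
            perror("socketpair");
            return 1;
        }

        const char msg[] = "ping";
        if (send(sv[0], msg, sizeof(msg), 0) < 0) {
            perror("send");
            return 1;
        }

        char buf[sizeof(msg)];
        ssize_t n = recv(sv[1], buf, sizeof(buf), 0);
        if (n < 0) {
            perror("recv");
            return 1;
        }
        printf("received %zd bytes: %s\n", n, buf);

        close(sv[0]);
        close(sv[1]);
        return 0;
    }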
Your analysis is spot on. We have already delivered extremely good performance in some workloads, which made us very happy. But some of the best architectural features will only pay off in the long term. We are actively working on some of them. We also lack maturity in some code paths, and every once in a while we find some very low numbers that we try to address as quickly as we can =)
The main problem with the NICs is that we lack the drivers and support. This is certainly in our plans, but the human time to actually code it is harder to find than the actual hardware =) (we are all tackling other priorities ATM).