Test failure on AsyncIo/CapabilityPipeBlockedSendStream


John Wang

Feb 1, 2021, 4:53:46 PM
to Cap'n Proto
I'm seeing a failure on the following test. Did a quick search of the group and didn't see any mention of it:

[ FAIL ] kj/async-io-test.c++:281: legacy test: AsyncIo/CapabilityPipeBlockedSendStream (52576 μs)

This result is consistent across 3 runs of `make -j4 check`. My environment is Ubuntu 16.04 and gcc 5.4.0, and I'm building `release-0.8.0` from git. Let me know if I can provide any other info.

Best,
John

Kenton Varda

Feb 1, 2021, 5:12:35 PM
to John Wang, Cap'n Proto
Hi John,

Was there anything in the log before the fail line? Usually there should be some sort of error message describing what went wrong, and maybe a stack trace.

-Kenton

--
You received this message because you are subscribed to the Google Groups "Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email to capnproto+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/capnproto/3820e497-c121-413c-9651-d3af19e068c3n%40googlegroups.com.

John Wang

Feb 1, 2021, 5:14:25 PM
to Kenton Varda, Cap'n Proto
Ah sorry, missed that bit.
kj/async-io-unix.c++:530: failed: ::sendmsg(fd, &msg, 0): Too many references: cannot splice
stack: 7fa89b08ceb2 7fa89b07af3b 5bdb1d 7fa89ae084dd

John Wang
Lead Robotics Engineer
May Mobility

Kenton Varda

Feb 1, 2021, 5:43:51 PM
to John Wang, Cap'n Proto
It looks like this error occurs when the number of file descriptors sent on a unix socket exceeds `ulimit -n`.

Admittedly, the test tries to send lots of file descriptors over a unix socket, stopping only when the socket buffer seems to be full. So depending on the system configuration, it could indeed cause this error.

On my system, `ulimit -n` returns 1024, while the test manages to send 278 file descriptors before filling the buffer.

What does `ulimit -n` show on your system?

-Kenton

John Wang

Feb 1, 2021, 6:37:13 PM
to Kenton Varda, Cap'n Proto
My system also returns 1024.

John Wang
Lead Robotics Engineer
May Mobility

Kenton Varda

Feb 1, 2021, 7:32:27 PM
to John Wang, Cap'n Proto
Hmm, well I'm not sure what to say here. Maybe your unix sockets have larger buffers for some reason? Are you running a weird kernel version? What architecture is this?

In any case, this error doesn't indicate a problem with Cap'n Proto, only a problem with the test, so you should be safe ignoring it.

-Kenton

John Wang

Feb 1, 2021, 7:58:14 PM
to Kenton Varda, Cap'n Proto
Ah, yep that was it. I had net.core.rmem_default and net.core.wmem_default set to a rather high value for another application. Reverting them to normal fixed the test. Thanks!
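For reference, these are the sysctls involved and how one might inspect and reset them. The numeric value shown is a common Linux default (around 208 KiB), not necessarily the right value for every distribution, so check your system's defaults before applying.

```shell
# Inspect the settings involved (values are system-dependent):
ulimit -n                     # per-process open-file limit
sysctl net.core.wmem_default  # default socket send-buffer size (bytes)
sysctl net.core.rmem_default  # default socket receive-buffer size (bytes)

# Restore typical kernel defaults (example value; verify for your distro):
sudo sysctl -w net.core.wmem_default=212992
sudo sysctl -w net.core.rmem_default=212992
```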

John Wang
Lead Robotics Engineer
May Mobility

Kenton Varda

Feb 1, 2021, 9:49:05 PM
to John Wang, Cap'n Proto
Ah, cool.

FWIW, this is still just an issue with the design of this particular test case. Having the socket buffers set large should be no problem in general. I suppose this test case should probably be rethought, but I'll probably wait for a second complaint before I mess with it.

-Kenton