
Glibc recvmsg from kernel netlink socket hangs forever


Steven Schlansker

Sep 24, 2015, 8:29:41 PM
to linux-...@vger.kernel.org
Hello linux-kernel,

I write to you on behalf of many developers at my company, who
are having trouble with their applications endlessly locking up
inside libc code, with no hope of recovery.

It currently affects mostly our Mono and Node processes, and the
symptoms are the same: user code invokes getaddrinfo, and libc
attempts to determine whether IPv4 or IPv6 is appropriate by sending
an RTM_GETADDR netlink message. The write into the netlink socket
succeeds, and it immediately reads back the results ... and waits
forever. The read never returns. The stack looks like this:

#0 0x00007fd7d8d214ad in recvmsg () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007fd7d8d3e44d in make_request (fd=fd@entry=13, pid=1) at ../sysdeps/unix/sysv/linux/check_pf.c:177
#2 0x00007fd7d8d3e9a4 in __check_pf (seen_ipv4=seen_ipv4@entry=0x7fd7d37fdd00, seen_ipv6=seen_ipv6@entry=0x7fd7d37fdd10,
in6ai=in6ai@entry=0x7fd7d37fdd40, in6ailen=in6ailen@entry=0x7fd7d37fdd50) at ../sysdeps/unix/sysv/linux/check_pf.c:341
#3 0x00007fd7d8cf64e1 in __GI_getaddrinfo (name=0x31216e0 "mesos-slave4-prod-uswest2.otsql.opentable.com", service=0x0,
hints=0x31216b0, pai=0x31f09e8) at ../sysdeps/posix/getaddrinfo.c:2355
#4 0x0000000000e101c8 in uv__getaddrinfo_work (w=0x31f09a0) at ../deps/uv/src/unix/getaddrinfo.c:102
#5 0x0000000000e09179 in worker (arg=<optimized out>) at ../deps/uv/src/threadpool.c:91
#6 0x0000000000e16eb1 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
#7 0x00007fd7d8ff3182 in start_thread (arg=0x7fd7d37fe700) at pthread_create.c:312
#8 0x00007fd7d8d2047d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(libuv is part of Node; it makes DNS lookups "asynchronous" by
running them on a background thread pool)

The applications will run successfully for hours or days, until eventually hanging with
no apparent pattern or cause. And once this hang happens it hangs badly, because
check_pf holds a lock across the problematic recvmsg call, so other threads
calling getaddrinfo block behind it.

I raised this issue on the libc-help mailing list, but I'm hoping that lkml
has more people familiar with netlink who can offer better advice.
The original thread is here:
https://sourceware.org/ml/libc-help/2015-09/msg00014.html

Looking at the getaddrinfo / check_pf source code:
https://fossies.org/dox/glibc-2.22/sysdeps_2unix_2sysv_2linux_2check__pf_8c_source.html

146       if (TEMP_FAILURE_RETRY (__sendto (fd, (void *) &req, sizeof (req), 0,
147                                         (struct sockaddr *) &nladdr,
148                                         sizeof (nladdr))) < 0)
149         goto out_fail;
150
151       bool done = false;
152
153       bool seen_ipv4 = false;
154       bool seen_ipv6 = false;
155
156       do
157         {
158           struct msghdr msg =
159             {
160               (void *) &nladdr, sizeof (nladdr),
161               &iov, 1,
162               NULL, 0,
163               0
164             };
165
166           ssize_t read_len = TEMP_FAILURE_RETRY (__recvmsg (fd, &msg, 0));
167           if (read_len <= 0)
168             goto out_fail;
169
170           if (msg.msg_flags & MSG_TRUNC)
171             goto out_fail;
172

I notice that if messages are dropped on either the send
or receive side, this code could hang forever. The netlink(7) man page makes
me slightly worried:

> Netlink is not a reliable protocol. It tries its best to deliver a message to its destination(s), but may drop messages when an out-of-memory condition or other error occurs.
> However, reliable transmissions from kernel to user are impossible in any case. The kernel can't send a netlink message if the socket buffer is full: the message will be dropped and the kernel and the user-space process will no longer have the same view of kernel state. It is up to the application to detect when this happens (via the ENOBUFS error returned by recvmsg(2)) and resynchronize.


I have taken the glibc code and created a simple(r) program to attempt to reproduce this issue.
I inserted some simple polling between the sendto and recvmsg calls to make the failure case more evident:

struct pollfd pfd;
pfd.fd = fd;
pfd.events = POLLIN;
pfd.revents = 0;

int pollresult = poll(&pfd, 1, 1000);
if (pollresult < 0) {
    perror("glibc: check_pf: poll");
    abort();
} else if (pollresult == 0 || (pfd.revents & POLLIN) == 0) {
    fprintf(stderr, "[%ld] glibc: check_pf: netlink socket read timeout\n", gettid());
    abort();
}

I have placed the full source code and strace output here:
https://gist.github.com/stevenschlansker/6ad46c5ccb22bc4f3473

The process quickly sends off hundreds of threads which sit in a
loop attempting this RTM_GETADDR message exchange.

The code may be compiled as "gcc -o pf_dump -pthread pf_dump.c"

An example invocation that quickly fails:

root@24bf2e440b5e:/# strace -ff -o pfd ./pf_dump
[3700] exit success
glibc: check_pf: netlink socket read timeout
Aborted (core dumped)

Interestingly, this seems to be very easy to reproduce using pthreads, but much less
common with fork() or clone()d threads. I'm not sure if this is just an artifact
of how I am testing or an actual clue, but I figured I'd mention it.

I have tested this program on vanilla kernels 4.0.4 and 4.2.1: the 4.0.4 version
reliably crashes, but I am having trouble reproducing the problem on 4.2.1.

So usually I would upgrade to 4.2.1 and be happy, except we ran into serious problems
with 4.1.2 and are now a little shy about upgrading:

https://bugzilla.xamarin.com/show_bug.cgi?id=29212

So my questions from here are:

* Is this glibc code correct?
* What are the situations where a recvmsg from a netlink socket can hang as it does here?
* Is the potential "fix" in 4.2.1 due to any particular commit? I checked the changelogs and nothing caught my eye.

We'll be testing out 4.2.1 more thoroughly over the coming days but I am hoping someone here
can shed some light on our problem.

Thanks for reading,
Steven

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/

Guenter Roeck

Sep 25, 2015, 12:37:11 AM
to Steven Schlansker, linux-...@vger.kernel.org, Eric Dumazet, Herbert Xu
http://comments.gmane.org/gmane.linux.network/363085

might explain your problem.

I thought this was resolved in 4.1, but it looks like the problem still persists
there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
affected. I don't know if there have been any relevant changes in 4.2.

Copying Herbert and Eric for additional input.

Guenter

Herbert Xu

Sep 25, 2015, 12:59:46 AM
to Guenter Roeck, Steven Schlansker, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
On Thu, Sep 24, 2015 at 09:36:53PM -0700, Guenter Roeck wrote:
>
> http://comments.gmane.org/gmane.linux.network/363085
>
> might explain your problem.
>
> I thought this was resolved in 4.1, but it looks like the problem still persists
> there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
> affected. I don't know if there have been any relevant changes in 4.2.
>
> Copying Herbert and Eric for additional input.

There was a separate bug discovered by Tejun recently. You need
to apply the patches

https://patchwork.ozlabs.org/patch/519245/
https://patchwork.ozlabs.org/patch/520824/

There is another follow-up but it shouldn't make any difference
in practice.

Cheers,
--
Email: Herbert Xu <her...@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

Guenter Roeck

Sep 25, 2015, 1:34:30 AM
to Herbert Xu, Steven Schlansker, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
Herbert,

On 09/24/2015 09:58 PM, Herbert Xu wrote:
> On Thu, Sep 24, 2015 at 09:36:53PM -0700, Guenter Roeck wrote:
>>
>> http://comments.gmane.org/gmane.linux.network/363085
>>
>> might explain your problem.
>>
>> I thought this was resolved in 4.1, but it looks like the problem still persists
>> there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
>> affected. I don't know if there have been any relevant changes in 4.2.
>>
>> Copying Herbert and Eric for additional input.
>
> There was a separate bug discovered by Tejun recently. You need
> to apply the patches
>
> https://patchwork.ozlabs.org/patch/519245/
> https://patchwork.ozlabs.org/patch/520824/
>
I assume this is on top of mainline ?

> There is another follow-up but it shouldn't make any difference
> in practice.
>

Any idea what may be needed for 4.1 ?
I am currently trying https://patchwork.ozlabs.org/patch/473041/,
but I have no idea if that will help with the problem we are seeing there.

Thanks,
Guenter

Herbert Xu

Sep 25, 2015, 11:55:33 AM
to Guenter Roeck, Steven Schlansker, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
On Thu, Sep 24, 2015 at 10:34:10PM -0700, Guenter Roeck wrote:
>
> Any idea what may be needed for 4.1 ?
> I am currently trying https://patchwork.ozlabs.org/patch/473041/,

This patch should not make any difference on 4.1 and later because
4.1 is where I rewrote rhashtable resizing and it should work (or
if it is broken then the latest kernel should be broken too).

> but I have no idea if that will help with the problem we are seeing there.

Having looked at your message again I don't think the issue I
alluded to is relevant since the symptom there ought to be a
straight kernel lock-up as opposed to just a user-space one because
you will end up with the kernel sending a message to itself.

And the fact that 4.2 works is more indicative as the bug is
present in both 4.1 and 4.2.

I'll try to reproduce this in 4.1 as time permits but no promises.

Guenter Roeck

Sep 25, 2015, 12:14:43 PM
to Herbert Xu, Steven Schlansker, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
On 09/25/2015 08:55 AM, Herbert Xu wrote:
> On Thu, Sep 24, 2015 at 10:34:10PM -0700, Guenter Roeck wrote:
>>
>> Any idea what may be needed for 4.1 ?
>> I am currently trying https://patchwork.ozlabs.org/patch/473041/,
>
> This patch should not make any difference on 4.1 and later because
> 4.1 is where I rewrote rhashtable resizing and it should work (or
> if it is broken then the latest kernel should be broken too).
>
Yes, applying (only) the above patch to 4.1 didn't help.

>> but I have no idea if that will help with the problem we are seeing there.
>
> Having looked at your message again I don't think the issue I
> alluded to is relevant since the symptom there ought to be a
> straight kernel lock-up as opposed to just a user-space one because
> you will end up with the kernel sending a message to itself.
>
> And the fact that 4.2 works is more indicative as the bug is
> present in both 4.1 and 4.2.
>
> I'll try to reproduce this in 4.1 as time permits but no promises.
>

I applied your patches (and a few additional netlink changes from 4.2)
to our 4.1 branch. I'll let you know if it makes a difference for us.

Thanks,
Guenter

Steven Schlansker

Sep 25, 2015, 5:37:25 PM
to Guenter Roeck, Herbert Xu, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org

On Sep 24, 2015, at 10:34 PM, Guenter Roeck <li...@roeck-us.net> wrote:

> Herbert,
>
> On 09/24/2015 09:58 PM, Herbert Xu wrote:
>> On Thu, Sep 24, 2015 at 09:36:53PM -0700, Guenter Roeck wrote:
>>>
>>> http://comments.gmane.org/gmane.linux.network/363085
>>>
>>> might explain your problem.
>>>
>>> I thought this was resolved in 4.1, but it looks like the problem still persists
>>> there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
>>> affected. I don't know if there have been any relevant changes in 4.2.
>>>
>>> Copying Herbert and Eric for additional input.
>>
>> There was a separate bug discovered by Tejun recently. You need
>> to apply the patches
>>
>> https://patchwork.ozlabs.org/patch/519245/
>> https://patchwork.ozlabs.org/patch/520824/
>>
> I assume this is on top of mainline ?
>
>> There is another follow-up but it shouldn't make any difference
>> in practice.
>>
>
> Any idea what may be needed for 4.1 ?
> I am currently trying https://patchwork.ozlabs.org/patch/473041/,
> but I have no idea if that will help with the problem we are seeing there.

Thank you for the patches to try, I'll build a kernel with them early next week
and report back. It sounds like it may not match my problem exactly so we'll
see.

In the meantime, I also observed the following oops:

[ 1709.620092] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[ 1709.624058] BUG: unable to handle kernel paging request at ffffea001dbef3c0
[ 1709.624058] IP: [<ffffea001dbef3c0>] 0xffffea001dbef3c0
[ 1709.624058] PGD 78f7dc067 PUD 78f7db067 PMD 800000078ec001e3
[ 1709.624058] Oops: 0011 [#1] SMP
[ 1709.624058] Modules linked in: i2c_piix4(E) btrfs(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) aesni_intel(E) aes_x86_64(E) lrw(E) gf128mul(E) glue_helper(E) ablk_helper(E) cryptd(E) floppy(E)
[ 1709.624058] CPU: 4 PID: 19714 Comm: pf_dump Tainted: G E 4.0.4 #1
[ 1709.624058] Hardware name: Xen HVM domU, BIOS 4.2.amazon 05/06/2015
[ 1709.624058] task: ffff880605a18000 ti: ffff8805f9358000 task.ti: ffff8805f9358000
[ 1709.624058] RIP: 0010:[<ffffea001dbef3c0>] [<ffffea001dbef3c0>] 0xffffea001dbef3c0
[ 1709.624058] RSP: 0018:ffff8805f935bbc0 EFLAGS: 00010246
[ 1709.624058] RAX: ffffea001dbef3c0 RBX: 0000000000000007 RCX: 0000000000000000
[ 1709.624058] RDX: 0000000000002100 RSI: ffff8805f992f308 RDI: ffff8806622f6b00
[ 1709.624058] RBP: ffff8805f935bc08 R08: 0000000000001ec0 R09: 0000000000002100
[ 1709.624058] R10: 0000000000000000 R11: ffff880771003200 R12: ffff8806622f6b00
[ 1709.624058] R13: 0000000000000002 R14: ffffffff8239e238 R15: ffff8805f992f308
[ 1709.624058] FS: 00007f0735f29700(0000) GS:ffff88078fc80000(0000) knlGS:0000000000000000
[ 1709.624058] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1709.624058] CR2: ffffea001dbef3c0 CR3: 00000005f7e88000 CR4: 00000000001407e0
[ 1709.624058] Stack:
[ 1709.624058] ffffffff81735ca2 0000000000000000 ffff8805f992f348 ffff88076b491400
[ 1709.624058] ffff8805f992f000 ffff8806622f6b00 0000000000000ec0 ffff8805f992f308
[ 1709.624058] ffff88065ffb0000 ffff8805f935bc38 ffffffff8176028a ffff8805f992f000
[ 1709.624058] Call Trace:
[ 1709.624058] [<ffffffff81735ca2>] ? rtnl_dump_all+0x122/0x1a0
[ 1709.624058] [<ffffffff8176028a>] netlink_dump+0x11a/0x2d0
[ 1709.624058] [<ffffffff81760625>] netlink_recvmsg+0x1e5/0x360
[ 1709.624058] [<ffffffff811b97c9>] ? kmem_cache_free+0x1b9/0x1d0
[ 1709.624058] [<ffffffff8170b33f>] sock_recvmsg+0x6f/0xa0
[ 1709.624058] [<ffffffff8170c1a4>] ___sys_recvmsg+0xe4/0x200
[ 1709.624058] [<ffffffff811f5305>] ? __fget_light+0x25/0x70
[ 1709.624058] [<ffffffff8170cbe2>] __sys_recvmsg+0x42/0x80
[ 1709.624058] [<ffffffff81961010>] ? int_check_syscall_exit_work+0x34/0x3d
[ 1709.624058] [<ffffffff8170cc32>] SyS_recvmsg+0x12/0x20
[ 1709.624058] [<ffffffff81960dcd>] system_call_fastpath+0x16/0x1b
[ 1709.624058] Code: 00 00 00 ff ff ff ff 01 00 00 00 00 01 10 00 00 00 ad de 00 02 20 00 00 00 ad de 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 ff ff 02 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 1709.798299] RIP [<ffffea001dbef3c0>] 0xffffea001dbef3c0
[ 1709.798299] RSP <ffff8805f935bbc0>
[ 1709.798299] CR2: ffffea001dbef3c0
[ 1709.798299] ---[ end trace 2e069ceceed3d61a ]---

So far it has only been noticed once. I don't know if it is the same issue, and it certainly doesn't always happen when this problem occurs,
but it looks curious all the same...

Steven Schlansker

Sep 25, 2015, 5:55:43 PM
to Guenter Roeck, Herbert Xu, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org

On Sep 25, 2015, at 2:37 PM, Steven Schlansker <stevensc...@gmail.com> wrote:

>
> On Sep 24, 2015, at 10:34 PM, Guenter Roeck <li...@roeck-us.net> wrote:
>
>> Herbert,
>>
>> On 09/24/2015 09:58 PM, Herbert Xu wrote:
>>> On Thu, Sep 24, 2015 at 09:36:53PM -0700, Guenter Roeck wrote:
>>>>
>>>> http://comments.gmane.org/gmane.linux.network/363085
>>>>
>>>> might explain your problem.
>>>>
>>>> I thought this was resolved in 4.1, but it looks like the problem still persists
>>>> there. At least I have reports from my workplace that 4.1.6 and 4.1.7 are still
>>>> affected. I don't know if there have been any relevant changes in 4.2.
>>>>
>>>> Copying Herbert and Eric for additional input.
>>>
>>> There was a separate bug discovered by Tejun recently. You need
>>> to apply the patches
>>>
>>> https://patchwork.ozlabs.org/patch/519245/
>>> https://patchwork.ozlabs.org/patch/520824/
>>>
>> I assume this is on top of mainline ?
>>
>>> There is another follow-up but it shouldn't make any difference
>>> in practice.
>>>
>>
>> Any idea what may be needed for 4.1 ?
>> I am currently trying https://patchwork.ozlabs.org/patch/473041/,
>> but I have no idea if that will help with the problem we are seeing there.
>
> Thank you for the patches to try, I'll build a kernel with them early next week
> and report back. It sounds like it may not match my problem exactly so we'll
> see.
Huh, when it rains, it pours... now I have a legit panic too!

[ 1675.228701] BUG: unable to handle kernel paging request at fffffffffffffe70
[ 1675.232058] IP: [<ffffffff8175dcea>] netlink_compare+0xa/0x30
[ 1675.232058] PGD 2015067 PUD 2017067 PMD 0
[ 1675.232058] Oops: 0000 [#1] SMP
[ 1675.232058] Modules linked in: i2c_piix4(E) btrfs(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) aesni_intel(E) aes_x86_64(E) lrw(E) gf128mul(E) glue_helper(E) ablk_helper(E) cryptd(E) floppy(E)
[ 1675.232058] CPU: 2 PID: 11152 Comm: pf_dump Tainted: G E 4.0.4 #1
[ 1675.232058] Hardware name: Xen HVM domU, BIOS 4.2.amazon 05/06/2015
[ 1675.232058] task: ffff880150fa6480 ti: ffff880150fb4000 task.ti: ffff880150fb4000
[ 1675.232058] RIP: 0010:[<ffffffff8175dcea>] [<ffffffff8175dcea>] netlink_compare+0xa/0x30
[ 1675.232058] RSP: 0018:ffff880150fb7d10 EFLAGS: 00010246
[ 1675.232058] RAX: 0000000000000000 RBX: 00000000023e503b RCX: 000000000561f992
[ 1675.232058] RDX: 00000000fffc27e4 RSI: ffff880150fb7db8 RDI: fffffffffffffbb8
[ 1675.232058] RBP: ffff880150fb7d58 R08: ffff8805a82f5ab8 R09: 000000000000000c
[ 1675.232058] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000000
[ 1675.232058] R13: ffffffff8175dce0 R14: ffff88008b37e800 R15: ffff88076db40000
[ 1675.232058] FS: 00007feec2440700(0000) GS:ffff88078fc40000(0000) knlGS:0000000000000000
[ 1675.232058] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1675.232058] CR2: fffffffffffffe70 CR3: 000000053bd17000 CR4: 00000000001407e0
[ 1675.232058] Stack:
[ 1675.232058] ffffffff81434dae ffff88076d864400 ffff880150fb7db8 ffff8801559ee8b8
[ 1675.232058] ffff88076db40000 ffff8805a82f5c48 ffff88008b37e800 ffff88076d864400
[ 1675.232058] 0000000000000000 ffff880150fb7da8 ffffffff81435476 ffff880150fb7db8
[ 1675.232058] Call Trace:
[ 1675.232058] [<ffffffff81434dae>] ? rhashtable_lookup_compare+0x5e/0xb0
[ 1675.232058] [<ffffffff81435476>] rhashtable_lookup_compare_insert+0x66/0xc0
[ 1675.232058] [<ffffffff8175eb63>] netlink_insert+0x83/0xe0
[ 1675.232058] [<ffffffff8175f11d>] netlink_autobind.isra.34+0xad/0xd0
[ 1675.232058] [<ffffffff817614b1>] netlink_bind+0x1b1/0x240
[ 1675.232058] [<ffffffff8170b8b8>] SYSC_bind+0xb8/0xf0
[ 1675.232058] [<ffffffff81110784>] ? __audit_syscall_entry+0xb4/0x110
[ 1675.232058] [<ffffffff81022e2c>] ? do_audit_syscall_entry+0x6c/0x70
[ 1675.232058] [<ffffffff81024553>] ? syscall_trace_enter_phase1+0x123/0x180
[ 1675.232058] [<ffffffff810248b6>] ? syscall_trace_leave+0xc6/0x120
[ 1675.232058] [<ffffffff811f5a35>] ? fd_install+0x25/0x30
[ 1675.232058] [<ffffffff8170c5de>] SyS_bind+0xe/0x10
[ 1675.232058] [<ffffffff81960dcd>] system_call_fastpath+0x16/0x1b
[ 1675.232058] Code: 00 8b 77 08 39 77 14 8d 4e 01 41 0f 44 c9 41 39 c8 89 4f 08 74 09 48 8b 08 83 3c 11 04 74 e2 5d c3 0f 1f 44 00 00 31 c0 8b 56 08 <39> 97 b8 02 00 00 55 48 89 e5 74 0a 5d c3 0f 1f 84 00 00 00 00
[ 1675.232058] RIP [<ffffffff8175dcea>] netlink_compare+0xa/0x30
[ 1675.232058] RSP <ffff880150fb7d10>
[ 1675.232058] CR2: fffffffffffffe70
[ 1675.232058] ---[ end trace 963ff50a058120d0 ]---
[ 1675.232058] Kernel panic - not syncing: Fatal exception in interrupt
[ 1675.232058] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

Guenter Roeck

Sep 25, 2015, 10:58:59 PM
to Steven Schlansker, Herbert Xu, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
For 4.0.x, you _really_ need to update to 4.0.9 to get the following two patches.

cf8befcc1a55 netlink: Disable insertions/removals during rehash
18889a4315a5 netlink: Reset portid after netlink_insert failure

Guenter

Guenter Roeck

Sep 25, 2015, 11:46:11 PM
to Herbert Xu, Steven Schlansker, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
Herbert,

On 09/25/2015 08:55 AM, Herbert Xu wrote:
> On Thu, Sep 24, 2015 at 10:34:10PM -0700, Guenter Roeck wrote:
>>
>> Any idea what may be needed for 4.1 ?
>> I am currently trying https://patchwork.ozlabs.org/patch/473041/,
>
> This patch should not make any difference on 4.1 and later because
> 4.1 is where I rewrote rhashtable resizing and it should work (or
> if it is broken then the latest kernel should be broken too).
>
>> but I have no idea if that will help with the problem we are seeing there.
>
> Having looked at your message again I don't think the issue I
> alluded to is relevant since the symptom there ought to be a
> straight kernel lock-up as opposed to just a user-space one because
> you will end up with the kernel sending a message to itself.
>
> And the fact that 4.2 works is more indicative as the bug is
> present in both 4.1 and 4.2.
>
> I'll try to reproduce this in 4.1 as time permits but no promises.
>

After applying your two patches, I don't see the problem in 4.1 anymore.
We'll run the system through regression; the complete cycle may take
a couple of weeks. I'll let you know if we find any further problems.

If you submit additional patches in that area, it would be great if you
can Cc: me.

Thanks,
Guenter

Steven Schlansker

Oct 5, 2015, 7:26:14 PM
to Guenter Roeck, Herbert Xu, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org

On Sep 25, 2015, at 7:58 PM, Guenter Roeck <li...@roeck-us.net> wrote:

> On 09/25/2015 02:37 PM, Steven Schlansker wrote:
>>
>>
>> Thank you for the patches to try, I'll build a kernel with them early next week
>> and report back. It sounds like it may not match my problem exactly so we'll
>> see.
>>
>
> For 4.0.x, you _really_ need to update to 4.0.9 to get the following two patches.
>
> cf8befcc1a55 netlink: Disable insertions/removals during rehash
> 18889a4315a5 netlink: Reset portid after netlink_insert failure

Hi Guenter,

Thank you very much for the information. We upgraded to 4.0.9 and all indications are that
the issue is gone. I will follow up if that is not the case.

Thank you everyone for your guidance.

Guenter Roeck

Oct 5, 2015, 7:30:35 PM
to Steven Schlansker, Herbert Xu, linux-...@vger.kernel.org, Eric Dumazet, net...@vger.kernel.org
On 10/05/2015 04:26 PM, Steven Schlansker wrote:
>
> On Sep 25, 2015, at 7:58 PM, Guenter Roeck <li...@roeck-us.net> wrote:
>
>> On 09/25/2015 02:37 PM, Steven Schlansker wrote:
>>>
>>>
>>> Thank you for the patches to try, I'll build a kernel with them early next week
>>> and report back. It sounds like it may not match my problem exactly so we'll
>>> see.
>>>
>>
>> For 4.0.x, you _really_ need to update to 4.0.9 to get the following two patches.
>>
>> cf8befcc1a55 netlink: Disable insertions/removals during rehash
>> 18889a4315a5 netlink: Reset portid after netlink_insert failure
>
> Hi Guenter,
>
> Thank you very much for the information. We upgraded to 4.0.9 and all indications are that
> the issue is gone. I will follow up if that is not the case.
>
> Thank you everyone for your guidance.
>

My pleasure.

Guenter