scsi: memory leak in sg_start_req

Dmitry Vyukov

Jan 9, 2018, 11:05:47 AM
to Doug Gilbert, je...@linux.vnet.ibm.com, Martin K. Petersen, linux-scsi, LKML, Jens Axboe, linux...@vger.kernel.org, syzkaller
Hello,

syzkaller has found the following memory leak:

unreferenced object 0xffff88004c190000 (size 8328):
comm "syz-executor", pid 4627, jiffies 4294749150 (age 45.507s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
20 00 00 00 22 01 00 00 00 00 00 00 04 00 00 00 ..."...........
backtrace:
[<000000005955b5a9>] kmalloc_order+0x59/0x80 mm/slab_common.c:1124
[<0000000043ae006e>] kmalloc_order_trace+0x1f/0x160 mm/slab_common.c:1133
[<00000000d33b2e16>] kmalloc_large include/linux/slab.h:433 [inline]
[<00000000d33b2e16>] __kmalloc+0x2c4/0x340 mm/slub.c:3751
[<00000000e7430040>] kmalloc include/linux/slab.h:504 [inline]
[<00000000e7430040>] bio_alloc_bioset+0x4d5/0x7e0 block/bio.c:450
[<00000000f370e717>] bio_kmalloc include/linux/bio.h:410 [inline]
[<00000000f370e717>] bio_copy_user_iov+0x2be/0xcb0 block/bio.c:1226
[<000000001d0b79ed>] __blk_rq_map_user_iov block/blk-map.c:67 [inline]
[<000000001d0b79ed>] blk_rq_map_user_iov+0x2b6/0x7d0 block/blk-map.c:136
[<000000004200a869>] blk_rq_map_user+0x11e/0x170 block/blk-map.c:166
[<000000008f21739e>] sg_start_req drivers/scsi/sg.c:1794 [inline]
[<000000008f21739e>] sg_common_write.isra.16+0x14df/0x1ed0 drivers/scsi/sg.c:777
[<00000000093f61e3>] sg_write+0x8a7/0xd7b drivers/scsi/sg.c:677
[<00000000b67dafdc>] __vfs_write+0x10d/0x8f0 fs/read_write.c:480
[<000000000638f16f>] vfs_write+0x1fd/0x570 fs/read_write.c:544
[<000000006a7e6867>] SYSC_write fs/read_write.c:589 [inline]
[<000000006a7e6867>] SyS_write+0xfa/0x250 fs/read_write.c:581

It can be reproduced with the following program:

// autogenerated by syzkaller (http://github.com/google/syzkaller)
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
	int fd = open("/dev/sg1", O_RDWR);
	/* 48-byte legacy sg_header write; per the follow-up below, the
	   header's reply_len field requests a ~2.2 MB reply buffer. */
	const char *data =
		"\xb6\x3d\xb8\x5e\x1e\x8d\x22\x00\x00\x00\x00\x00\x00\x08\xaf\xd6\x1d"
		"\xcc\x43\x6a\xed\x5e\xd2\xbc\x70\x18\xce\xbc\x9b\x97\xae\x21\x91\x4d"
		"\x87\x2c\x67\x8c\xe2\x2c\x9b\x16\x0e\x96\xaa\x1f\xae\x1a";
	write(fd, data, 0x30);
	return 0;
}

If executed in a loop, memory consumption grows without bound.
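
For concreteness, a harness along the following lines (hypothetical; the iteration count is arbitrary) runs the same open()+write() in short-lived children, so each iteration's fd is implicitly closed on exit, and memory consumption still grows:

/* Hypothetical loop harness: each child repeats the repro body and
 * exits, so its fd is implicitly closed; the kernel-side allocation
 * reported above is still not reclaimed. */
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	const char *data =
		"\xb6\x3d\xb8\x5e\x1e\x8d\x22\x00\x00\x00\x00\x00\x00\x08\xaf\xd6\x1d"
		"\xcc\x43\x6a\xed\x5e\xd2\xbc\x70\x18\xce\xbc\x9b\x97\xae\x21\x91\x4d"
		"\x87\x2c\x67\x8c\xe2\x2c\x9b\x16\x0e\x96\xaa\x1f\xae\x1a";

	for (int i = 0; i < 10000; i++) {
		pid_t pid = fork();
		if (pid == 0) {
			int fd = open("/dev/sg1", O_RDWR);
			if (fd >= 0)
				write(fd, data, 0x30);
			_exit(0); /* fd closed implicitly on exit */
		}
		if (pid > 0)
			waitpid(pid, NULL, 0);
	}
	return 0;
}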

This is on upstream commit b2cd1df66037e7c4697c7e40496bf7e4a5e16a2d.

Douglas Gilbert

Jan 11, 2018, 1:04:09 AM
to Dmitry Vyukov, je...@linux.vnet.ibm.com, Martin K. Petersen, linux-scsi, LKML, Jens Axboe, linux...@vger.kernel.org, syzkaller
The seemingly random data that the program sends asks for a buffer
of 2,264,314 bytes, which the sg driver procures and then holds while
waiting for the caller to either issue a read(), close() the file,
or shut down the program. The test program does none of those
expected operations; it simply asks for the same resources again.
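
The expected cycle with this legacy interface pairs each write() with a read() that consumes the reply and releases the buffer. A minimal sketch (the device path and the INQUIRY command are illustrative, not from the report):

/* Minimal sketch of the legacy (struct sg_header) interface:
 * reply_len tells the driver how large a reply buffer to procure
 * (header included); the paired read() releases it. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <scsi/sg.h>

int main(void)
{
	struct { struct sg_header hdr; unsigned char data[4096]; } io;
	unsigned char inq[6] = { 0x12, 0, 0, 0, 96, 0 }; /* INQUIRY, 96-byte reply */
	int fd = open("/dev/sg0", O_RDWR); /* illustrative device */

	if (fd < 0)
		return 1;
	memset(&io.hdr, 0, sizeof(io.hdr));
	io.hdr.reply_len = sizeof(io.hdr) + 96; /* buffer the driver procures */
	memcpy(io.data, inq, sizeof(inq));
	write(fd, &io, sizeof(io.hdr) + sizeof(inq));
	read(fd, &io, sizeof(io.hdr) + 96); /* consume reply, release buffer */
	close(fd);
	return 0;
}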

In my version of your test code (attached), that happens 1,021 times,
at which point the file descriptors in that process are exhausted and
all subsequent open()s and write()s fail (EBADF). The output from my
program on one run was:

# ./sg_syzk_grow
First errno=9 [Bad file descriptor] index=1021
done_count=50000, err_count=48979, last_errno=9 [Bad file descriptor]

# lsscsi -gs
[0:0:0:0] disk Linux scsi_debug 0186 /dev/sda /dev/sg0 2.14GB

Monitoring that program with 'free' from another terminal, I see
about 2.5 GB of RAM "swallowed" almost immediately when the test
program runs. When the program exits (about 50 seconds later), as far
as I can see all that RAM is given back.


If you used the same program to write to a regular file rather than
an sg device, it would eventually fill any file system at a rate of
48 bytes per iteration (given enough file descriptor resources). The
sg driver, using its original 1994 interface, deprecated for around
18 years, just gets a system to resource exhaustion more quickly.

Doug Gilbert




[Attachment: sg_syzk_grow.c]

Dmitry Vyukov

Jan 11, 2018, 3:24:14 AM
to Doug Gilbert, je...@linux.vnet.ibm.com, Martin K. Petersen, linux-scsi, LKML, Jens Axboe, linux...@vger.kernel.org, syzkaller
But the process exits, which implicitly closes all files. There is
nothing left in user space holding onto these resources. I would
understand if the same process opened an unbounded number of files and
never closed the old ones (as in your test). But here it seems that
the kernel must free all associated resources. What am I missing?

Bart Van Assche

Jan 11, 2018, 11:28:33 AM
to syzk...@googlegroups.com, linux-...@vger.kernel.org, linux...@vger.kernel.org, dgil...@interlog.com, martin....@oracle.com, dvy...@google.com, ax...@kernel.dk, linux...@vger.kernel.org, je...@linux.vnet.ibm.com
On Thu, 2018-01-11 at 01:04 -0500, Douglas Gilbert wrote:
> Monitoring that program with 'free' from another terminal, I see
> about 2.5 GB of RAM "swallowed" almost immediately when the test
> program runs. When the program exits (about 50 seconds later), as far
> as I can see all that RAM is given back.

Hello Doug,

There is probably something leaking memory in one of the functions in
the reported call stack. kmemleak reports the following in my tests (I
haven't yet had time to analyze this further):

unreferenced object 0xffff880363c09088 (size 192):
comm "multipath", pid 20088, jiffies 4295034706 (age 6544.980s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 20 00 00 00 12 01 00 00 00 00 00 00 .... ...........
backtrace:
[<ffffffff81659e1a>] kmemleak_alloc+0x4a/0xa0
[<ffffffff811f51fe>] __kmalloc+0x16e/0x320
[<ffffffff812e4848>] bio_alloc_bioset+0x198/0x1f0
[<ffffffff812e51b9>] bio_map_user_iov+0x129/0x3f0
[<ffffffff812f5863>] blk_rq_map_user_iov+0x123/0x210
[<ffffffff812f5a02>] blk_rq_map_user+0x52/0x60
[<ffffffff8130b7b5>] sg_io+0x385/0x3c0
[<ffffffff8130bad3>] scsi_cmd_ioctl+0x2e3/0x450
[<ffffffff8130bc82>] scsi_cmd_blk_ioctl+0x42/0x50
[<ffffffff8148d835>] sd_ioctl+0x85/0x110
[<ffffffff81302bfb>] blkdev_ioctl+0x4db/0x970
[<ffffffff812660fd>] block_ioctl+0x3d/0x50
[<ffffffff812348d4>] do_vfs_ioctl+0x94/0x670
[<ffffffff81234ef1>] SyS_ioctl+0x41/0x70
[<ffffffff81001e5c>] do_syscall_64+0x5c/0x110
[<ffffffff81667564>] return_from_SYSCALL_64+0x0/0x7a
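
For reference, with CONFIG_DEBUG_KMEMLEAK enabled and debugfs mounted at /sys/kernel/debug, a scan like the one above can be triggered and read back on demand; a minimal helper along these lines:

/* Trigger a kmemleak scan and dump the report; assumes debugfs is
 * mounted at /sys/kernel/debug and kmemleak is enabled. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/kmemleak", "w");
	char line[256];

	if (!f)
		return 1;
	fputs("scan", f); /* same effect as: echo scan > kmemleak */
	fclose(f);

	f = fopen("/sys/kernel/debug/kmemleak", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}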

Bart.