WARNING in handle_mm_fault


Dmitry Vyukov

Nov 24, 2015, 8:50:47 AM
to Johannes Weiner, Michal Hocko, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen
Hello,

I am hitting the following WARNING on commit
8005c49d9aea74d382f474ce11afbbc7d7130bec (Nov 15):


------------[ cut here ]------------
WARNING: CPU: 3 PID: 12661 at include/linux/memcontrol.h:412 handle_mm_fault+0x17ec/0x3530()
Modules linked in:
CPU: 3 PID: 12661 Comm: executor Tainted: G B W 4.4.0-rc1+ #81
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
00000000ffffffff ffff88003725fc80 ffffffff825d3336 0000000000000000
ffff880061d95900 ffffffff84cfb6c0 ffff88003725fcc0 ffffffff81247889
ffffffff815b68fc ffffffff84cfb6c0 000000000000019c 0000000002f68038
Call Trace:
[<ffffffff81247ab9>] warn_slowpath_null+0x29/0x30 kernel/panic.c:411
[<ffffffff815b68fc>] handle_mm_fault+0x17ec/0x3530 mm/memory.c:3440
[< inline >] access_error arch/x86/mm/fault.c:1020
[<ffffffff81220951>] __do_page_fault+0x361/0x8b0 arch/x86/mm/fault.c:1227
[< inline >] trace_page_fault_kernel ./arch/x86/include/asm/trace/exceptions.h:44
[< inline >] trace_page_fault_entries arch/x86/mm/fault.c:1314
[<ffffffff81220f5a>] trace_do_page_fault+0x8a/0x230 arch/x86/mm/fault.c:1330
[<ffffffff81213f14>] do_async_page_fault+0x14/0x70
[<ffffffff84bf2b98>] async_page_fault+0x28/0x30
---[ end trace 179dec89fcb66e7f ]---


Reproduction instructions are somewhat involved; I can provide detailed
steps if necessary, but maybe we can debug it without the reproducer.
Just in case, I've left some traces here:
https://gist.githubusercontent.com/dvyukov/451019c8fb14aa4565a4/raw/4f6d55c19fbec74c5923a1aa62acf1db81fe4e98/gistfile1.txt


As a blind guess, I've added the following BUG into copy_process:

diff --git a/kernel/fork.c b/kernel/fork.c
index b4dc490..c5667e8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1620,6 +1620,8 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 	trace_task_newtask(p, clone_flags);
 	uprobe_copy_process(p, clone_flags);
 
+	BUG_ON(p->memcg_may_oom);
+
 	return p;


And it fired:

------------[ cut here ]------------
kernel BUG at kernel/fork.c:1623!
invalid opcode: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 3 PID: 28384 Comm: executor Not tainted 4.4.0-rc1+ #83
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff880034c542c0 ti: ffff880033140000 task.ti: ffff880033140000
RIP: 0010:[<ffffffff81242df3>] [<ffffffff81242df3>] copy_process+0x32e3/0x5bf0
RSP: 0018:ffff880033147c28 EFLAGS: 00010246
RAX: ffff880034c542c0 RBX: ffff880033148000 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880060ca9a14
RBP: ffff880033147e08 R08: ffff880060ca9808 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000001 R12: ffff88006269b148
R13: 00000000003d0f00 R14: 1ffff10006628fa8 R15: ffff880060ca9640
FS: 0000000002017880(0063) GS:ffff88006dd00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fc73b14ae78 CR3: 000000000089a000 CR4: 00000000000006e0
Stack:
ffffea00017e1780 1ffff10006628f8b ffff880033147c48 ffffffff81338b22
ffff880034c54a58 ffffffff816509d0 0000000000000246 ffffffff00000001
ffff880060ca99b8 00007fc73b14ae78 ffffea00017e1780 00000002624ab4d0
Call Trace:
[<ffffffff81245afd>] _do_fork+0x14d/0xb40 kernel/fork.c:1729
[< inline >] SYSC_clone kernel/fork.c:1838
[<ffffffff812465c7>] SyS_clone+0x37/0x50 kernel/fork.c:1832
[<ffffffff84bf0c76>] entry_SYSCALL_64_fastpath+0x16/0x7a arch/x86/entry/entry_64.S:185
Code: 03 0f b6 04 02 48 89 fa 83 e2 07 38 d0 7f 09 84 c0 74 05 e8 c0
e4 3d 00 41 f6 87 d4 03 00 00 20 0f 84 d7 ce ff ff e8 ed 70 21 00 <0f>
0b e8 e6 70 21 00 48 8b 1d 8f 39 cf 04 49 bc 00 00 00 00 00
RIP [<ffffffff81242df3>] copy_process+0x32e3/0x5bf0 kernel/fork.c:1623 (discriminator 1)
RSP <ffff880033147c28>
---[ end trace 6b4b09a815461606 ]---


So it seems that copy_process creates tasks with the memcg_may_oom flag
already set, which looks wrong. Could that be the root cause?


Thank you

Johannes Weiner

Nov 24, 2015, 5:31:32 PM
to Dmitry Vyukov, Michal Hocko, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen
Hi Dmitry,

On Tue, Nov 24, 2015 at 02:50:26PM +0100, Dmitry Vyukov wrote:
> As a blind guess, I've added the following BUG into copy_process:
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index b4dc490..c5667e8 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1620,6 +1620,8 @@ static struct task_struct *copy_process(unsigned long clone_flags,
>  	trace_task_newtask(p, clone_flags);
>  	uprobe_copy_process(p, clone_flags);
>  
> +	BUG_ON(p->memcg_may_oom);
> +
>  	return p;

Thanks for your report.

I don't see how this could happen through the legitimate setters of
p->memcg_may_oom. Something must clobber it. What happens with the
following patch applied?

diff --git a/include/linux/sched.h b/include/linux/sched.h
index edad7a4..42e1285 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1463,9 +1463,11 @@ struct task_struct {
 	unsigned sched_reset_on_fork:1;
 	unsigned sched_contributes_to_load:1;
 	unsigned sched_migrated:1;
+	unsigned dummy_a:1;
 #ifdef CONFIG_MEMCG
 	unsigned memcg_may_oom:1;
 #endif
+	unsigned dummy_b:1;
 #ifdef CONFIG_MEMCG_KMEM
 	unsigned memcg_kmem_skip_account:1;
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index f97f2c4..ab6f7ba 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1617,6 +1617,12 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 	trace_task_newtask(p, clone_flags);
 	uprobe_copy_process(p, clone_flags);
 
+	if (p->dummy_a || p->dummy_b || p->memcg_may_oom) {
+		printk(KERN_ALERT "dummy_a:%d dummy_b:%d memcg_may_oom:%d\n",
+		       p->dummy_a, p->dummy_b, p->memcg_may_oom);
+		BUG();
+	}
+
 	return p;
 
 bad_fork_cancel_cgroup:

Michal Hocko

Nov 25, 2015, 3:44:05 AM
to Dmitry Vyukov, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
[CCing Tejun and Peter]
Sasha reported the same thing some time ago:
http://www.spinics.net/lists/cgroups/msg14075.html. Tejun had a theory
(http://www.spinics.net/lists/cgroups/msg14078.html), but we never got
to the bottom of it.

--
Michal Hocko
SUSE Labs

Tetsuo Handa

Nov 25, 2015, 5:51:27 AM
to Michal Hocko, Dmitry Vyukov, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
On 2015/11/25 17:44, Michal Hocko wrote:
> Sasha has reported the same thing some time ago
> http://www.spinics.net/lists/cgroups/msg14075.html. Tejun had a theory
> http://www.spinics.net/lists/cgroups/msg14078.html but we never got down
> to the solution.

Did you check the assembly code?
https://gcc.gnu.org/ml/gcc/2012-02/msg00005.html

Dmitry Vyukov

Nov 25, 2015, 8:06:19 AM
to Johannes Weiner, Michal Hocko, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen
I cannot reproduce the condition again, either with your patch or with
my patch... Will try harder.

Dmitry Vyukov

Nov 25, 2015, 8:09:01 AM
to Tetsuo Handa, Michal Hocko, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
If the race described in
http://www.spinics.net/lists/cgroups/msg14078.html does actually
happen, then there is nothing to check.
https://gcc.gnu.org/ml/gcc/2012-02/msg00005.html talks about different
memory locations: if there is store widening involving different
memory locations, then that is a compiler bug. But this race happens on
a single memory location, and in that case the code itself is buggy.
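
For illustration, here is a minimal userspace sketch (hypothetical names,
not kernel code; build with something like gcc -O2 -pthread) of the kind of
word-sized read-modify-write race on bitfields we are talking about:

#include <pthread.h>
#include <stdio.h>

/* Two bitfields packed into the same storage unit, like the task_struct
 * flags: every write to one of them is a non-atomic load/modify/store of
 * the whole unit. */
struct flags {
	unsigned a:1;
	unsigned b:1;
};

static struct flags f;

static void *writer_a(void *arg)
{
	for (int i = 0; i < 10000000; i++)
		f.a = !f.a;	/* races with writer_b's RMW of the same word */
	return NULL;
}

static void *writer_b(void *arg)
{
	for (int i = 0; i < 10000000; i++)
		f.b = !f.b;	/* can overwrite a concurrent update of f.a */
	return NULL;
}

int main(void)
{
	pthread_t ta, tb;

	pthread_create(&ta, NULL, writer_a, NULL);
	pthread_create(&tb, NULL, writer_b, NULL);
	pthread_join(ta, NULL);
	pthread_join(tb, NULL);
	/* Updates get lost; the final values are unpredictable. */
	printf("a=%d b=%d\n", (int)f.a, (int)f.b);
	return 0;
}

No compiler bug is needed for this to go wrong; the language gives no
atomicity guarantee for the containing unit.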

Dmitry Vyukov

Nov 25, 2015, 8:12:46 AM
to Johannes Weiner, Michal Hocko, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen
I mean that I can still reproduce the original warning, but I can't
catch memcg_may_oom=1 in copy_process.

Tetsuo Handa

Nov 25, 2015, 10:27:23 AM
to dvy...@google.com, mho...@kernel.org, han...@cmpxchg.org, cgr...@vger.kernel.org, linu...@kvack.org, ak...@linux-foundation.org, syzk...@googlegroups.com, k...@google.com, gli...@google.com, sasha...@oracle.com, edum...@google.com, gth...@google.com, t...@kernel.org, pet...@infradead.org
Dmitry Vyukov wrote:
> If the race described in
> http://www.spinics.net/lists/cgroups/msg14078.html does actually
> happen, then there is nothing to check.
> https://gcc.gnu.org/ml/gcc/2012-02/msg00005.html talks about different
> memory locations: if there is store widening involving different
> memory locations, then that is a compiler bug. But this race happens on
> a single memory location, and in that case the code itself is buggy.
>

All of ->in_execve, ->in_iowait, ->sched_reset_on_fork, ->sched_contributes_to_load,
->sched_migrated, ->memcg_may_oom, ->memcg_kmem_skip_account and ->brk_randomized
share the same byte.

sched_fork(p) modifies p->sched_reset_on_fork but p is not yet visible.
__sched_setscheduler(p) modifies p->sched_reset_on_fork.
try_to_wake_up(p) modifies p->sched_contributes_to_load.
perf_event_task_migrate(p) modifies p->sched_migrated.

Trying to reproduce this problem with

 static __always_inline bool
 perf_sw_migrate_enabled(void)
 {
-	if (static_key_false(&perf_swevent_enabled[PERF_COUNT_SW_CPU_MIGRATIONS]))
-		return true;
 	return false;
 }

would help test the ->sched_migrated case.

Dmitry Vyukov

Nov 25, 2015, 12:21:23 PM
to Tetsuo Handa, Michal Hocko, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
I have some progress.

With the following patch:

dvyukov@dvyukov-z840:~/src/linux-dvyukov$ git diff include/linux/sched.h mm/memory.c
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2fae7d8..4c126a1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1455,6 +1455,8 @@ struct task_struct {
 	/* Used for emulating ABI behavior of previous Linux versions */
 	unsigned int personality;
 
+	union {
+	struct {
 	unsigned in_execve:1;	/* Tell the LSMs that the process is doing an
 				 * execve */
 	unsigned in_iowait:1;
@@ -1463,18 +1465,24 @@
 	unsigned sched_reset_on_fork:1;
 	unsigned sched_contributes_to_load:1;
 	unsigned sched_migrated:1;
+	unsigned dummy_a:1;
 #ifdef CONFIG_MEMCG
 	unsigned memcg_may_oom:1;
 #endif
+	unsigned dummy_b:1;
 #ifdef CONFIG_MEMCG_KMEM
 	unsigned memcg_kmem_skip_account:1;
 #endif
 #ifdef CONFIG_COMPAT_BRK
 	unsigned brk_randomized:1;
 #endif
+	};
+	unsigned nonatomic_flags;
+	};
 
 	unsigned long atomic_flags; /* Flags needing atomic access. */
 
+
 	struct restart_block restart_block;
 
 	pid_t pid;
diff --git a/mm/memory.c b/mm/memory.c
index deb679c..6351dac 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -62,6 +62,7 @@
 #include <linux/dma-debug.h>
 #include <linux/debugfs.h>
 #include <linux/userfaultfd_k.h>
+#include <linux/kasan.h>
 
 #include <asm/io.h>
 #include <asm/pgalloc.h>
@@ -3436,12 +3437,45 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * Enable the memcg OOM handling for faults triggered in user
 	 * space. Kernel faults are handled more gracefully.
 	 */
-	if (flags & FAULT_FLAG_USER)
+	if (flags & FAULT_FLAG_USER) {
+		volatile int x;
+		unsigned f0, f1;
+		f0 = READ_ONCE(current->nonatomic_flags);
+		for (x = 0; x < 1000; x++) {
+			WRITE_ONCE(current->nonatomic_flags, 0xaeaeaeae);
+			cpu_relax();
+			WRITE_ONCE(current->nonatomic_flags, 0xaeaeaeab);
+			cpu_relax();
+			f1 = READ_ONCE(current->nonatomic_flags);
+			if (f1 != 0xaeaeaeab) {
+				pr_err("enable: flags 0x%x -> 0x%x\n", f0, f1);
+				break;
+			}
+		}
+		WRITE_ONCE(current->nonatomic_flags, f0);
+
 		mem_cgroup_oom_enable();
+	}
 
 	ret = __handle_mm_fault(mm, vma, address, flags);
 
 	if (flags & FAULT_FLAG_USER) {
+		volatile int x;
+		unsigned f0, f1;
+		f0 = READ_ONCE(current->nonatomic_flags);
+		for (x = 0; x < 1000; x++) {
+			WRITE_ONCE(current->nonatomic_flags, 0xaeaeaeae);
+			cpu_relax();
+			WRITE_ONCE(current->nonatomic_flags, 0xaeaeaeab);
+			cpu_relax();
+			f1 = READ_ONCE(current->nonatomic_flags);
+			if (f1 != 0xaeaeaeab) {
+				pr_err("enable: flags 0x%x -> 0x%x\n", f0, f1);
+				break;
+			}
+		}
+		WRITE_ONCE(current->nonatomic_flags, f0);
+
 		mem_cgroup_oom_disable();
 		/*
 		 * The task may have entered a memcg OOM situation but


I see:

[ 153.484152] enable: flags 0x8 -> 0xaeaeaeaf
[ 168.707786] enable: flags 0x8 -> 0xaeaeaeae
[ 169.654966] enable: flags 0x40 -> 0xaeaeaeae
[ 176.809080] enable: flags 0x48 -> 0xaeaeaeaa
[ 177.496219] enable: flags 0x8 -> 0xaeaeaeaf
[ 193.266703] enable: flags 0x0 -> 0xaeaeaeae
[ 199.536435] enable: flags 0x8 -> 0xaeaeaeae
[ 210.650809] enable: flags 0x48 -> 0xaeaeaeaf
[ 210.869397] enable: flags 0x8 -> 0xaeaeaeaf
[ 216.150804] enable: flags 0x8 -> 0xaeaeaeaa
[ 231.607211] enable: flags 0x8 -> 0xaeaeaeaf
[ 260.677408] enable: flags 0x48 -> 0xaeaeaeae
[ 272.065364] enable: flags 0x40 -> 0xaeaeaeaf
[ 281.594973] enable: flags 0x48 -> 0xaeaeaeaf
[ 282.899860] enable: flags 0x8 -> 0xaeaeaeaf
[ 286.472173] enable: flags 0x8 -> 0xaeaeaeae
[ 286.763203] enable: flags 0x8 -> 0xaeaeaeaf
[ 288.229107] enable: flags 0x0 -> 0xaeaeaeaf
[ 291.336522] enable: flags 0x8 -> 0xaeaeaeae
[ 310.082981] enable: flags 0x48 -> 0xaeaeaeaf
[ 313.798935] enable: flags 0x8 -> 0xaeaeaeaf
[ 343.340508] enable: flags 0x8 -> 0xaeaeaeaf
[ 344.170635] enable: flags 0x48 -> 0xaeaeaeaf
[ 357.568555] enable: flags 0x8 -> 0xaeaeaeaf
[ 359.158179] enable: flags 0x48 -> 0xaeaeaeaf
[ 361.188300] enable: flags 0x40 -> 0xaeaeaeaa
[ 365.636639] enable: flags 0x8 -> 0xaeaeaeaf

Dmitry Vyukov

Nov 25, 2015, 12:32:35 PM
to Tetsuo Handa, Michal Hocko, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
With a better check:

	volatile int x;
	unsigned f0, f1;
	f0 = READ_ONCE(current->nonatomic_flags);
	for (x = 0; x < 1000; x++) {
		WRITE_ONCE(current->nonatomic_flags, 0xaeaeaeff);
		cpu_relax();
		WRITE_ONCE(current->nonatomic_flags, 0xaeaeae00);
		cpu_relax();
		f1 = READ_ONCE(current->nonatomic_flags);
		if (f1 != 0xaeaeae00) {
			pr_err("enable1: flags 0x%x -> 0x%x\n", f0, f1);
			break;
		}

		WRITE_ONCE(current->nonatomic_flags, 0xaeaeae00);
		cpu_relax();
		WRITE_ONCE(current->nonatomic_flags, 0xaeaeaeff);
		cpu_relax();
		f1 = READ_ONCE(current->nonatomic_flags);
		if (f1 != 0xaeaeaeff) {
			pr_err("enable2: flags 0x%x -> 0x%x\n", f0, f1);
			break;
		}
	}
	WRITE_ONCE(current->nonatomic_flags, f0);


I see:

[ 82.339662] enable1: flags 0x8 -> 0xaeaeae04
[ 102.743386] enable1: flags 0x0 -> 0xaeaeaeff
[ 122.209687] enable2: flags 0x0 -> 0xaeaeae04
[ 142.366938] enable1: flags 0x8 -> 0xaeaeae04
[ 157.273155] diable2: flags 0x40 -> 0xaeaeaefb
[ 162.320346] enable2: flags 0x8 -> 0xaeaeae00
[ 163.241090] enable2: flags 0x0 -> 0xaeaeaefb
[ 194.266300] diable2: flags 0x40 -> 0xaeaeaefb
[ 196.247483] enable1: flags 0x8 -> 0xaeaeae04
[ 219.996095] enable2: flags 0x0 -> 0xaeaeaefb
[ 228.088207] diable1: flags 0x48 -> 0xaeaeae04
[ 228.802678] diable2: flags 0x40 -> 0xaeaeaefb
[ 241.829173] enable1: flags 0x8 -> 0xaeaeae04
[ 257.601127] diable2: flags 0x48 -> 0xaeaeae04
[ 265.207038] enable2: flags 0x8 -> 0xaeaeaefb
[ 269.887365] enable1: flags 0x0 -> 0xaeaeae04
[ 272.254086] diable1: flags 0x40 -> 0xaeaeae04
[ 272.480384] enable1: flags 0x8 -> 0xaeaeae04
[ 276.430762] enable2: flags 0x8 -> 0xaeaeaefb
[ 289.526677] enable1: flags 0x8 -> 0xaeaeae04


Which suggests that somebody messes with the 3rd bit (both setting and
clearing it). Assuming the compiler does not reorder the fields, this
bit is sched_reset_on_fork.
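
For reference, assuming the field order from the debug patch above (and
CONFIG_MEMCG=y), the low bits of nonatomic_flags would map as follows:

/*
 * bit 0 (0x01)  in_execve
 * bit 1 (0x02)  in_iowait
 * bit 2 (0x04)  sched_reset_on_fork        <- the bit that gets flipped
 * bit 3 (0x08)  sched_contributes_to_load  (seen in the f0 values above)
 * bit 4 (0x10)  sched_migrated
 * bit 5 (0x20)  dummy_a
 * bit 6 (0x40)  memcg_may_oom              (seen in the f0 values above)
 * bit 7 (0x80)  dummy_b
 */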

Michal Hocko

Nov 25, 2015, 12:37:34 PM
to Dmitry Vyukov, Tetsuo Handa, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, syzkaller, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
On Wed 25-11-15 18:21:02, Dmitry Vyukov wrote:
[...]
> I have some progress.

Please have a look at Peter's patch posted in the original email thread
http://lkml.kernel.org/r/20151125150...@twins.programming.kicks-ass.net

Dmitry Vyukov

Nov 25, 2015, 12:44:21 PM
to syzkaller, Tetsuo Handa, Johannes Weiner, cgr...@vger.kernel.org, linu...@kvack.org, Andrew Morton, Kostya Serebryany, Alexander Potapenko, Sasha Levin, Eric Dumazet, Greg Thelen, Tejun Heo, Peter Zijlstra
On Wed, Nov 25, 2015 at 6:37 PM, Michal Hocko <mho...@kernel.org> wrote:
> On Wed 25-11-15 18:21:02, Dmitry Vyukov wrote:
> [...]
>> I have some progress.
>
> Please have a look at Peter's patch posted in the original email thread
> http://lkml.kernel.org/r/20151125150...@twins.programming.kicks-ass.net

Yes, I've posted there as well. That patch should help.
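
For readers following the archive: the general idea of that fix (a sketch
of the approach, not necessarily the exact patch) is to keep the bits that
other tasks may write under scheduler locks in a different word from the
bits that only 'current' touches, for example by forcing alignment with a
zero-width bitfield:

	/* scheduler bits, serialized by scheduler locks */
	unsigned sched_reset_on_fork:1;
	unsigned sched_contributes_to_load:1;
	unsigned sched_migrated:1;
	unsigned :0;			/* force the next field into a new word */

	/* unserialized, strictly 'current' */
	unsigned in_execve:1;
	unsigned in_iowait:1;
#ifdef CONFIG_MEMCG
	unsigned memcg_may_oom:1;
#endif

That way the scheduler's read-modify-write cycles and current's own flag
updates no longer target the same memory location.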

Tetsuo Handa

Nov 26, 2015, 6:33:14 AM
to dvy...@google.com, syzk...@googlegroups.com, han...@cmpxchg.org, cgr...@vger.kernel.org, linu...@kvack.org, ak...@linux-foundation.org, k...@google.com, gli...@google.com, sasha...@oracle.com, edum...@google.com, gth...@google.com, t...@kernel.org, pet...@infradead.org
OK. This bug seems to have existed since commit ca94c442535a ("sched:
Introduce SCHED_RESET_ON_FORK scheduling policy flag"). Should a

Cc: <sta...@vger.kernel.org> [2.6.32+]

line be added?

By the way, would using "unsigned char" rather than "unsigned" save some bytes?
I'm simply trying not to change the size of "struct task_struct"...
According to C99, only "unsigned int", "signed int" and "_Bool" are
allowed as bit-field types, but many compilers accept other types such as
"unsigned char", provided that we watch out for compiler bugs.
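
For what it's worth, a quick userspace sketch (GCC-specific; non-int
bit-field types are a compiler extension) of the size difference being
asked about:

#include <stdio.h>

struct flags_uint  { unsigned      a:1, b:1, c:1; };	/* int-sized storage unit */
struct flags_uchar { unsigned char a:1, b:1, c:1; };	/* char-sized storage unit */

int main(void)
{
	printf("unsigned:      %zu\n", sizeof(struct flags_uint));	/* typically 4 */
	printf("unsigned char: %zu\n", sizeof(struct flags_uchar));	/* typically 1 */
	return 0;
}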

Peter Zijlstra

Nov 30, 2015, 11:20:07 AM
to Tetsuo Handa, dvy...@google.com, syzk...@googlegroups.com, han...@cmpxchg.org, cgr...@vger.kernel.org, linu...@kvack.org, ak...@linux-foundation.org, k...@google.com, gli...@google.com, sasha...@oracle.com, edum...@google.com, gth...@google.com, t...@kernel.org
On Thu, Nov 26, 2015 at 08:33:05PM +0900, Tetsuo Handa wrote:

> By the way, would using "unsigned char" rather than "unsigned" save some bytes?

There are architectures that cannot do independent byte writes. Best
leave it a machine word unless there's a real pressing reason otherwise.
