
small changes in aesxcbcmac.c


Alexander Nasonov

Sep 25, 2016, 6:04:22 PM
to tech...@netbsd.org
The first change shrinks aes_xcbc_mac_init by 183 bytes on amd64
(from 562 to 379 bytes).
The second change avoids a comparison with an address that may
point beyond the end of a buffer.
The third change is stylistic.
Alex
[attachment: aesxcbcmac.c]

Rhialto

Sep 25, 2016, 6:29:15 PM
to Alexander Nasonov, tech...@netbsd.org
On Sun 25 Sep 2016 at 22:01:16 +0100, Alexander Nasonov wrote:
> - while (addr + AES_BLOCKSIZE < ep) {
> + while (ep - addr > AES_BLOCKSIZE) {

I think that if ep points beyond the buffer (apart from the
just-past-the-end location), the subtraction is just as much undefined
behaviour as before...
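One way to sidestep both forms of undefined behaviour is to track a remaining byte count instead of comparing or subtracting pointers, so that no out-of-bounds pointer is ever formed. This is only an illustrative sketch (the function name and shape are hypothetical, not the committed aesxcbcmac.c code), mirroring the patched loop's `> AES_BLOCKSIZE` condition, which leaves a final block of at most AES_BLOCKSIZE bytes for separate handling:

```c
#include <stddef.h>

#define AES_BLOCKSIZE 16

/*
 * Hypothetical sketch: walk the input one block at a time using a
 * length test (len > AES_BLOCKSIZE), the counted equivalent of the
 * patched "ep - addr > AES_BLOCKSIZE".  No pointer is ever advanced
 * past the end of the buffer, so no pointer arithmetic can be UB.
 * Returns the number of full blocks consumed before the final one.
 */
static size_t
count_leading_blocks(const unsigned char *addr, size_t len)
{
	size_t blocks = 0;

	while (len > AES_BLOCKSIZE) {
		/* ... process addr[0 .. AES_BLOCKSIZE-1] here ... */
		addr += AES_BLOCKSIZE;
		len -= AES_BLOCKSIZE;
		blocks++;
	}
	/* len is now 1..AES_BLOCKSIZE: the final block, handled separately. */
	return blocks;
}
```

The trade-off is carrying `len` through the loop instead of a sentinel pointer, but the compiler typically generates the same code for either form.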

-Olaf.
--
___ Olaf 'Rhialto' Seibert -- Wayland: Those who don't understand X
\X/ rhialto/at/xs4all.nl -- are condemned to reinvent it. Poorly.

Eric Haszlakiewicz

Sep 25, 2016, 6:39:31 PM
to Alexander Nasonov, tech...@netbsd.org
On September 25, 2016 5:01:16 PM EDT, Alexander Nasonov <al...@yandex.ru> wrote:
>The first change shrinks aes_xcbc_mac_init by 183 bytes on amd64
>(from 562 to 379 bytes).

Do you mean it shrinks its stack usage? Or does that change to static const vars somehow shrink the function itself?

Eric


Alexander Nasonov

Sep 26, 2016, 2:09:44 AM
to Eric Haszlakiewicz, tech...@netbsd.org
gcc copies each byte to the stack:

ffffffff80532a10: c6 45 98 01 movb $0x1,-0x68(%rbp)
ffffffff80532a14: c6 45 99 01 movb $0x1,-0x67(%rbp)
ffffffff80532a18: c6 45 9a 01 movb $0x1,-0x66(%rbp)
ffffffff80532a1c: c6 45 9b 01 movb $0x1,-0x65(%rbp)
ffffffff80532a20: c6 45 9c 01 movb $0x1,-0x64(%rbp)
ffffffff80532a24: c6 45 9d 01 movb $0x1,-0x63(%rbp)
ffffffff80532a28: c6 45 9e 01 movb $0x1,-0x62(%rbp)
ffffffff80532a2c: c6 45 9f 01 movb $0x1,-0x61(%rbp)
ffffffff80532a30: c6 45 a0 01 movb $0x1,-0x60(%rbp)
ffffffff80532a34: c6 45 a1 01 movb $0x1,-0x5f(%rbp)
ffffffff80532a38: c6 45 a2 01 movb $0x1,-0x5e(%rbp)
ffffffff80532a3c: c6 45 a3 01 movb $0x1,-0x5d(%rbp)
ffffffff80532a40: c6 45 a4 01 movb $0x1,-0x5c(%rbp)
ffffffff80532a44: c6 45 a5 01 movb $0x1,-0x5b(%rbp)
ffffffff80532a48: c6 45 a6 01 movb $0x1,-0x5a(%rbp)
ffffffff80532a4c: c6 45 a7 01 movb $0x1,-0x59(%rbp)
ffffffff80532a50: c6 45 a8 02 movb $0x2,-0x58(%rbp)
ffffffff80532a54: c6 45 a9 02 movb $0x2,-0x57(%rbp)
ffffffff80532a58: c6 45 aa 02 movb $0x2,-0x56(%rbp)

and so on.

Alex

Joerg Sonnenberger

Sep 26, 2016, 3:52:40 AM
to tech...@netbsd.org
Both: the variable is placed in the kernel's rodata, and the function
shrinks because it no longer has to copy the bytes onto the stack.

Joerg
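The difference Joerg describes can be shown with a minimal sketch (illustrative names, not the actual aesxcbcmac.c code). A non-static const array initializer is an automatic object, so the compiler must build it on the stack on every call, which is exactly the movb-per-byte sequence in the disassembly above; a `static const` array lives once in .rodata and the function only references it:

```c
#include <string.h>

#define AES_BLOCKSIZE 16

/*
 * Non-static: the initializer is an automatic object, so each call
 * materializes all 16 bytes on the stack (one movb per byte with
 * gcc, as in the disassembly quoted in this thread).
 */
void
init_nonstatic(unsigned char out[AES_BLOCKSIZE])
{
	const unsigned char k1[AES_BLOCKSIZE] = {
		1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	};

	memcpy(out, k1, sizeof(k1));
}

/*
 * Static const: the 16 bytes are emitted once into .rodata and the
 * function merely takes their address, shrinking the generated code.
 */
void
init_static(unsigned char out[AES_BLOCKSIZE])
{
	static const unsigned char k1[AES_BLOCKSIZE] = {
		1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	};

	memcpy(out, k1, sizeof(k1));
}
```

Both functions produce identical output; only the location of the constant bytes (stack versus rodata) and the resulting code size differ.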

Alexander Nasonov

Sep 26, 2016, 4:04:29 PM
to tech...@netbsd.org
If there are no objections I'll commit the code.

PS: I noticed some excessive memory copying (often of fixed-size blocks).
Some of it may be there to prevent side-channel attacks that measure
execution time via cache misses. Data on the stack is more likely to be
in cache, but that's not bulletproof. If we rely on this at all, buffers
on the stack should have the __cacheline_aligned attribute, but I don't
see any in the code.

> aes_xcbc_mac_result(u_int8_t *addr, void *vctx)
> {
> - u_char digest[AES_BLOCKSIZE];
> + u_int8_t digest[AES_BLOCKSIZE];
> aesxcbc_ctx *ctx;
> int i;

This buffer isn't actually needed: the destination addr can be passed
directly to the rijndaelEncrypt() calls inside the function. I didn't
change it because it is the only array in the function, and removing it
would disable SSP (the stack protector) there.
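The simplification being described can be sketched as follows. This is a hypothetical illustration: encrypt_block() is a stand-in for rijndaelEncrypt() (here it just copies, so the sketch is testable), and the function names are invented, not taken from aesxcbcmac.c:

```c
#include <string.h>

#define AES_BLOCKSIZE 16

/* Hypothetical stand-in for rijndaelEncrypt(); a real cipher would
 * transform src into dst, here we just copy for illustration. */
static void
encrypt_block(unsigned char dst[AES_BLOCKSIZE],
    const unsigned char src[AES_BLOCKSIZE])
{
	memcpy(dst, src, AES_BLOCKSIZE);
}

/* Current shape: encrypt into a local digest, then copy it out.
 * The local array is also what makes gcc emit an SSP canary here. */
static void
result_with_buffer(unsigned char *addr, const unsigned char *state)
{
	unsigned char digest[AES_BLOCKSIZE];

	encrypt_block(digest, state);
	memcpy(addr, digest, sizeof(digest));
}

/* Suggested shape: encrypt straight into the caller's buffer,
 * dropping the intermediate copy (at the cost of losing the
 * function's only local array, and with it SSP coverage). */
static void
result_direct(unsigned char *addr, const unsigned char *state)
{
	encrypt_block(addr, state);
}
```

Both shapes yield the same bytes at addr; the difference is one memcpy and one stack array.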

Alex

Alexander Nasonov

Sep 26, 2016, 4:23:34 PM
to tech...@netbsd.org
Alexander Nasonov wrote:
> If there are no objections I'll commit the code.

Ah, Christos committed it already. Less work for me then :)

--
Alex