[LLVMdev] Clang and i128


Mario Schwalbe

Apr 24, 2012, 4:27:20 AM
to llv...@cs.uiuc.edu
Hi all,

I currently use clang from LLVM 3.0 to compile my source code to bitcode
(on an X86-64 machine) before it is later processed by a pass, like this:

$ clang -m32 -O3 -S foo.c -emit-llvm -o foo.ll

However, for some reason the resulting module contains 128-bit
instructions, e.g.:

%6 = load i8* %arrayidx.1.i, align 1, !tbaa !0
%7 = zext i8 %6 to i128
%8 = shl nuw nsw i128 %7, 8

which the pass can't handle (and never will).
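
For reference, the kind of code involved looks roughly like this (a
hypothetical reduction, not the actual foo.c): a small fixed-size local
buffer that is filled byte by byte and then copied out as a whole.

/* sketch.c - hypothetical reduction, not the original foo.c */
#include <string.h>
#include <stdint.h>

uint32_t sum_words(const unsigned char *src)
{
    unsigned char buf[16];
    uint32_t words[4];
    int i;

    /* Byte-wise fill of a small fixed-size local; at -O3 the loop
     * gets fully unrolled. */
    for (i = 0; i < 16; ++i)
        buf[i] = src[i] ^ 0x5A;

    /* Whole-aggregate copy out of the local; the optimizer apparently
     * widens the 16-byte buffer into a single 128-bit integer, which
     * produces zext/shl chains like the ones quoted above. */
    memcpy(words, buf, sizeof(words));
    return words[0] + words[1] + words[2] + words[3];
}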

So my question is: why does it do this? The code doesn't use any integer
type larger than 32 bits. Is there an option to prevent clang from using
those types? If not, which pass might be responsible for this kind of
optimization?

Thanks in advance,
Mario

Rotem, Nadav

Apr 24, 2012, 5:04:21 AM
to Mario Schwalbe, llv...@cs.uiuc.edu
Mario,

The ScalarReplAggregates pass attempts to convert structs into scalars to enable many other optimizations. Try running the pass with a different threshold, or set a breakpoint on ConvertScalar_ExtractValue and check whether you can manually disable some of the transformations in SRA.
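
One way to confirm which pass introduces the i128 values is to emit
unoptimized IR and replay the optimizations under opt. A rough sketch
(standard opt options, but double-check the flag spellings against your
build):

$ clang -m32 -O0 -S foo.c -emit-llvm -o foo.ll
$ opt -O3 -print-after-all -S foo.ll -o foo.opt.ll 2> passes.log
$ grep -n -m1 'i128' passes.log

The banner above the first dump containing an i128 names the responsible
pass (the replayed pipeline may differ slightly from clang's own -O3, but
the widening should still show up). With a debug build of opt you can then
stop inside the conversion code:

$ gdb --args opt -scalarrepl -S foo.ll
(gdb) break ConvertScalar_ExtractValue
(gdb) run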

Nadav
---------------------------------------------------------------------
Intel Israel (74) Limited
