Without actually trying it myself, I would say that the -O1 command line runs optimization passes where the -O0 command line does not. Thus, your “baseline” IR is already somewhat optimized in the -O1 case. If you want to see IR with no optimizations run at all, you want to add `-Xclang -disable-llvm-passes` to your command line for producing unoptimized IR. I think this would produce the same results for -O0 and -O1.
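For instance (untested; test.c is just a placeholder), something like this would let you check:

    # Emit textual IR with all LLVM passes disabled:
    clang -O0 -Xclang -disable-O0-optnone -Xclang -disable-llvm-passes -S -emit-llvm test.c -o test-o0.ll
    clang -O1 -Xclang -disable-O0-optnone -Xclang -disable-llvm-passes -S -emit-llvm test.c -o test-o1.ll
    # An empty diff would mean the two baselines really are identical:
    diff test-o0.ll test-o1.ll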
--paulr
From: llvm-dev <llvm-dev...@lists.llvm.org> On Behalf Of hameeza ahmed via llvm-dev
Sent: Wednesday, November 13, 2019 4:02 AM
To: llvm-dev <llvm...@lists.llvm.org>
Subject: [llvm-dev] Difference between clang -O1 -Xclang -disable-O0-optnone and clang -O0 -Xclang -disable-O0-optnone in LLVM 9
Hello,
I'm trying to test individual O3 optimizations / replicate O3 behavior on IR. I took unoptimized IR (-O0), generated with `clang -O0 -Xclang -disable-O0-optnone`. I read somewhere about `clang -O1 -Xclang -disable-O0-optnone`, so I also tested starting from that initial IR.
I have observed that, when applying individual optimizations, performance (i.e., run time) is better when the base/initial IR is generated via `clang -O1 -Xclang -disable-O0-optnone`. When the initial IR comes from `clang -O0 -Xclang -disable-O0-optnone`, performance is worse.
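For reference, my pipeline looks roughly like this (test.c and the licm pass are just examples):

    # Generate the two baseline IR variants:
    clang -O0 -Xclang -disable-O0-optnone -S -emit-llvm test.c -o base-o0.ll
    clang -O1 -Xclang -disable-O0-optnone -S -emit-llvm test.c -o base-o1.ll
    # Apply an individual optimization with opt, then measure each result:
    opt -S -licm base-o0.ll -o licm-o0.ll
    opt -S -licm base-o1.ll -o licm-o1.ll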
What is the possible reason for this?
What is the right way to do this?
Please guide.
Thank You
_______________________________________________
LLVM Developers mailing list
llvm...@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev