On 10/3/13 07:42 PM, Kun Ling wrote:
Hi Jyoti,
The best benchmark is your own application, and since Clang & LLVM have plenty of aggressive optimizations (some of them may be bug-prone), it also depends on how you want to improve performance.
The following are some benchmarks you could use to evaluate the performance of clang.
1. Phoronix has done performance testing using its Phoronix Test Suite (http://www.phoronix-test-suite.com/), which includes plenty of commonly used applications. The full list of applications in the Phoronix benchmarks can be found here: http://openbenchmarking.org/suites/pts
Hi LK
-1 :P Have you looked at their test suite and how it's set up? It gives little regard to swapping compilers in and out and tracking the performance impact of compiler flag changes.
2. For an industry-standard performance comparison, SPEC CPU is also a good choice. You can find out more here: http://www.spec.org/cpu/ . General-purpose CPU vendors use it to show performance improvements.
Waaay over tuned...
I can't say with absolute certainty, but didn't these favor polyhedral-style loop optimizations?
3. There are also some other small benchmarks that can test compiler performance, like polybench (http://www.cse.ohio-state.edu/~pouchet/software/polybench/), which focuses on evaluating the compiler's loop transformations.
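To give a feel for what these kernels look like, below is a minimal sketch (not taken from polybench itself; the function name and size are illustrative) of the kind of static-control loop nest such suites exercise, which a compiler's loop optimizer can tile, interchange, or vectorize:

#define N 512

/* Naive dense matrix multiply: C += A * B.  The perfectly nested,
   affine loop bounds are exactly what polyhedral loop optimizers
   (e.g. Polly) are designed to transform. */
void matmul(double A[N][N], double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
}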
---------------------
You have to decide what types of code you want to benchmark - HPC, C++, scalar/vectorized, embedded, etc.
If you narrow down what sort of performance comparison you want, I can offer some suggestions. The above benchmarks aren't bad, but in some cases they won't be a fair comparison against clang/llvm. Other compilers may have done excessive tuning for them, and that tuning will show up when they're compared against a default clang/llvm.
For example, the NAS parallel benchmarks probably have less direct tuning from Intel. If you're looking at embedded, maybe Dhrystone....
Lastly - there are some benchmarks in the LLVM test-suite to consider. I can't say these are very good choices, but they are probably easy to run:
https://llvm.org/viewvc/llvm-project/test-suite/trunk/SingleSource/
2) For the lag in execution time due to floating-point operations, it was clearly observed that gcc used the floating-point instruction FSQRT, whereas clang seemed to use an emulated function (?), bl sqrt.
Note that we used the following flags for both clang and gcc compilation.
-march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16 -mtune=cortex-a8
In fact, I was surprised to see that even when "-march=armv7-a -mfloat-abi=hard -mfpu=vfpv3-d16 -mtune=cortex-a8" was used, the generated code did not use the hardware vsqrt instruction; instead there was a bl sqrt instruction.
Could someone point out why vsqrt was not emitted in the assembly even though the softfp or 'hard' float ABI was specified?
The vsqrt instruction may not be generated automatically for platforms where math functions may set errno. Try compiling with -fno-math-errno and see if that helps.
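A minimal test case (my own sketch, not the original code) to check this is below; the compile line in the comment is an assumption based on the flags quoted above plus -fno-math-errno, and it presumes a toolchain already targeting ARM:

/* sqrt_test.c
   Assumed compile line (hard-float ABI, errno setting disabled):
     clang -O2 -march=armv7-a -mfloat-abi=hard -mfpu=vfpv3-d16 \
           -mtune=cortex-a8 -fno-math-errno -S sqrt_test.c
   With -fno-math-errno the call below can be lowered directly to
   vsqrt.f64; without it, clang must keep the libcall (bl sqrt) so
   that errno can still be set for negative inputs.  The exact
   output also depends on the clang version. */
#include <math.h>

double root(double x) {
    return sqrt(x);
}

Checking the generated .s for vsqrt.f64 versus the bl sqrt libcall should show whether the errno handling was the culprit.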