_______________________________________________
LLVM Developers mailing list
llvm...@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
We are executing these library tests on a bare-metal target, so it is not possible for us to copy over batches of files (we can't even use SCP or SSH). At the moment we use a simple Python script as our %{exec} command that loads the cross-compiled binary into the target's memory, where it is then executed.
Having the tests compiled in batch and then executed one by one would work for us, however. In general, as long as our use case continues to work without major changes to our test infrastructure, I'm all for trying to improve the testing performance of the libraries.
Cheers,
Dominik
We've been looking at some related questions and have some other options that you might be interested in. First of all, a comment: the idea of 'deferred execution' seems problematic, since it fundamentally changes the semantics of lit's report. I'm curious how you use this in practice to, for instance, check that tests were eventually run somewhere?
We have an embedded environment where building LLVM itself is barely feasible due to resource constraints. As a result we cross-compile LLVM and generate an installed tarball that is copied to the embedded system. However, this means that we can't test the cross-compiled LLVM executables until we get to the embedded system.

Our approach has been to factor the test directory into a hierarchical CMake project. This project then uses the standard LLVM CMake export mechanisms (i.e. find_package(LLVM)) to find LLVM. The refactoring has no effect on a regular in-tree top-level build; however, we can check out the LLVM tree on the embedded system and build *just the test area* against the installed tarball of LLVM. I think this refactoring of the CMake is something that would be relatively easy to carry out on the LLVM tree. Relative to your current approach, it moves the problem of tarballing and remote code execution out of lit's responsibility and into a more devops/release responsibility, which makes more sense to me.
Perhaps you also have other goals, such as partitioning tests to run on multiple target nodes? I haven't thought too much about how this would interact.
Separately, we also have the problem of tests that need to behave differently in different contexts, e.g.:

RUN: clang --target=my_cross_target ... -o test.elf
RUN: %run% test.elf

In this case, we'd like to be able to test the compilation part outside of the target, but when we run the same test on the target machine, we can both compile and run. Today we do something similar (as you see above) using a lit substitution that varies depending on the CMake environment. Doing this is somewhat clumsy, and I've thought it would be nicer to move it into lit itself, allowing the test to be:

RUN: clang --target=my_cross_target ... -o test.elf
RUN_ON_TARGET: %run% test.elf

Here the behavior of RUN*: lines would be configurable in lit.cfg.py. This could implement part of your current use case (although maybe there would be impacts on how the reporting is done?).
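The substitution-based workaround can be sketched in a few lines of lit.cfg.py. All names here are hypothetical: on_target would be derived from the CMake configuration (e.g. via a lit parameter), and run_wrapper.py stands in for whatever script actually executes a binary on the device; off-target, %run% degrades to a no-op so the compile-only half of the test still gets checked:

```python
# Sketch of the lit substitution trick described above.
# Hypothetical names: `on_target` would come from the CMake cache,
# and run_wrapper.py is the device-side loader script.

def make_run_substitution(on_target):
    """Return the expansion text for the %run% substitution."""
    if on_target:
        # On the device: hand the ELF to the loader script.
        return "python3 /opt/board/run_wrapper.py"
    # Off-target: only the compile steps are meaningful, so %run%
    # becomes a command that ignores its argument and succeeds.
    return "true"

# In a real lit.cfg.py this would be wired up as:
#   config.substitutions.append(("%run%", make_run_substitution(on_target)))
```

The clumsiness is that the test file itself can't tell the two modes apart; a first-class RUN_ON_TARGET: keyword would make the intent explicit in the test rather than hidden in configuration.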
I also want to point out that there are different ways of doing remote execution, and they might require slightly different models. It would be good to have all the cases considered when we discuss the correct approach to take. Two examples I have are:

* Runtime tests for libcxx/libcxxabi/compiler-rt/libunwind. These are different from normal llvm/clang lit tests because they can have a cross-compile target. When building for the host, the tests work because a RUN line can describe both compile-time and runtime commands; for a cross-compile target, though, you can't really run the tests, because you need a way to execute the compile commands on the host and the runtime commands on the device. You can't just bundle everything up and send it to the device, since you might not have a toolchain that can be used on the device to compile. One other option is using gtest. When I was looking at libunwind, I almost wanted to rewrite its test suite in gtest (it is very small, while the libcxx tests are too large to be rewritten) so I could simply build the tests, install them onto the device, and drive them on the device side with lit.
* The other kind of remote execution is for distributed build/test. If you have a distributed build system, the bottleneck of build/test is definitely running the test suite. In this case, we might think about executing the compile RUN lines remotely. Bundling everything up works for this, but it will be hard to distribute to a pool of nodes without huge overhead. I know this is different from the problem we are trying to solve here, but it is interesting to think about if we want to remodel how lit works.