Sure thing! And thanks for taking a look at scikit-build.
In referencing Anaconda, I wanted to point out that when building your own module with external dependencies:
1 - You will need to decide how to link in those external dependencies.
2 - You will need to decide what requirements (if any) your package will demand from your environment so that its external dependencies are resolved correctly.
3 - Most importantly, points 1 and 2 are somewhat coupled. The choices you make regarding one will have implications for those regarding the other.
Anaconda is an example of one way you can make these choices, but there are others. Given your requirements, and to avoid placing extra demands on your environment, one possible approach is to build your extension module and *statically* link in your external dependencies (libboost_thread, in this case). Prepackaged libraries are not usually built for this, so you might have to compile libboost_thread yourself. The idea is to compile with -fPIC but link into a static .a archive, then link against that archive when building your extension module. Your module will then contain all of its code (including whatever parts of libboost_thread it uses) in a single .so, so there are no external symbols to resolve when Python imports it -- no need to fiddle with LD_LIBRARY_PATH. Since everything lives in one object, virtualenv should "just work", and your code should run happily as long as you stay on the same architecture/OS.
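To make that concrete, here's a minimal setup.py-style sketch of the static-link idea. The module name, source file, and archive path are all hypothetical placeholders -- the key is listing your own -fPIC-built static archive in extra_objects so its code is pulled into the extension's .so at link time:

```python
# Hypothetical sketch: statically link a PIC-compiled archive into the
# extension so the resulting .so has no runtime dependency on libboost_thread.
# "libboost_thread_pic.a" is an assumed name for an archive you built yourself
# with -fPIC; adjust all paths and names to match your project.
from setuptools import Extension

ext = Extension(
    "mymodule",                        # hypothetical module name
    sources=["mymodule.cpp"],
    extra_compile_args=["-fPIC"],
    # Objects listed here are handed to the linker verbatim, so the archive's
    # code ends up inside mymodule.so instead of being resolved at import time.
    extra_objects=["/path/to/libboost_thread_pic.a"],
)

# Pass ext to setup(ext_modules=[ext]) in your setup.py to build it.
```

The same effect can be had from CMake (which is where scikit-build comes in) by pointing target_link_libraries at the static archive instead of the shared one.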
The above suggestion is pretty constrained, but those constraints are what make it self-contained. More flexible options open up if you're willing to relax some of them -- for example, if you're willing to manage your LD_LIBRARY_PATH and/or loosen your definition of "self-contained".
Regarding your specific questions:
1 - For the majority of cases, CMake provides facilities for finding installed libraries on your system. However, I hesitate to suggest copying external libraries directly into wheels. Even if that worked for you, you would still have to set LD_LIBRARY_PATH before running any applications. I'm pretty sure this would work, but it seems like a lot of hassle given that you're sticking to one architecture/OS. If it were me, I would leave the library out entirely and accept that my module is going to load something from outside my virtualenv.
2 - This is actually a really interesting idea. There's no end to the potential for magic features (and shot-off feet) with the right imp module hackery. With the Linux loader, however, you're pretty limited, since you're trying to dig into internals that sit well beneath the Python interpreter. I think the only thing you can really do is temporarily modify the environment of the running interpreter before the import proper, and then (preferably) restore the environment afterwards. I suspect your "dirty" hack is doing something similar?
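For what it's worth, the save/patch/restore discipline I mean looks something like the sketch below. One big caveat, hedged loudly: on Linux, glibc's loader captures LD_LIBRARY_PATH at process startup, so changing os.environ from inside a running interpreter is not guaranteed to influence how later imports resolve shared libraries -- this sketch mainly illustrates the restore-after-import pattern, with a stand-in import and a hypothetical path:

```python
# Sketch of temporarily patching the environment around an import, restoring
# the original values afterwards. Caveat: the Linux dynamic loader reads
# LD_LIBRARY_PATH once at process startup, so this may not affect library
# resolution in an already-running interpreter.
import os
from contextlib import contextmanager

@contextmanager
def patched_env(**overrides):
    """Temporarily set environment variables, restoring originals on exit."""
    saved = {key: os.environ.get(key) for key in overrides}
    os.environ.update(overrides)
    try:
        yield
    finally:
        for key, old in saved.items():
            if old is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = old

# Usage: wrap the import in the patched environment ("/opt/mylibs" is a
# hypothetical path, and json stands in for your extension module).
with patched_env(LD_LIBRARY_PATH="/opt/mylibs"):
    import json
```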