Dear SIG Build team,
we are self-compiling TensorFlow on various HPC clusters due to hardware
requirements (e.g. CUDA drivers) and were using `--config=mkl` to
(supposedly) enable MKL and/or oneDNN to accelerate various DNN ops on
the CPU.
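For reference, the build invocation is roughly the following (simplified; the exact copts vary per cluster):

```sh
bazel build --config=opt --config=mkl \
    --copt=-march=native \
    //tensorflow/tools/pip_package:build_pip_package
```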
However, we were notified that our self-built package performs worse than
the pip package on CPU, even though we enable more aggressive
optimizations, e.g. `-march=native` to make use of AVX2 etc.
Further investigation revealed a serious oversubscription of threads,
leading to many involuntary context switches that severely impact
performance. Those can be (mostly) mitigated by setting e.g.
`OMP_NUM_THREADS=1`, but we can't do that by default for all users of
our cluster for obvious reasons.
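To illustrate, this is how the oversubscription shows up and how the workaround helps (`benchmark.py` is a placeholder for any CPU training/inference script):

```sh
# GNU time -v reports "Involuntary context switches"; compare the
# default run against one with OpenMP pinned to a single thread.
/usr/bin/time -v python benchmark.py
OMP_NUM_THREADS=1 /usr/bin/time -v python benchmark.py
```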
Comparing our build with the official pip packages led to the mentioned
mkl option, which is a collective setting for these flags:
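In recent TensorFlow versions, `.bazelrc` defines it roughly as follows (the exact set varies between versions):

```
build:mkl --define=build_with_mkl=true --define=enable_mkl=true
build:mkl --define=tensorflow_mkldnn_contraction_kernel=0
build:mkl --define=build_with_openmp=true
```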
Searching the binaries of the pip package for the effects of those flags
leads me to conclude that neither of them is used, i.e. the official pip
packages are not built with `--config=mkl`. See
for a detailed analysis.
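A quick cross-check of both observations (the module path of `IsMklEnabled` differs between TF versions, and the library path is illustrative):

```sh
# Ask the runtime whether it was built with MKL/oneDNN support ...
python -c "from tensorflow.python.framework import test_util; print(test_util.IsMklEnabled())"
# ... and check whether an OpenMP runtime is linked in, which the
# omp part of --config=mkl would cause.
ldd site-packages/tensorflow/libtensorflow_framework.so.2 | grep -i omp
```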
However, disabling (i.e. not passing) `--config=mkl` makes at least one
test fail: `//tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test`
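Reproducible with something along these lines:

```sh
# fails without the mkl config ...
bazel test //tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test
# ... but passes when it is enabled
bazel test --config=mkl //tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test
```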
Only disabling the OpenMP part, i.e. passing `--define=build_with_mkl=true
--define=tensorflow_mkldnn_contraction_kernel=0` instead, makes many tests
fail. Using the related `--config=mkl_threadpool` seems to be even worse,
with NaNs, segfaults, FPEs...
- So what exactly is the purpose of `build_with_mkl` and `enable_mkl`?
- How exactly are those flags related to oneDNN and MKL? I don't see the
actual MKL being used, hence the confusion.
- How are the official pip packages built? Are they tested with that
configuration?