Compiling kaldi with MKL and TBB


Ange Castro

Aug 27, 2015, 9:16:28 AM
to kaldi-help
Hi everyone,

After tweaking the configure file a bit, I was able to compile Kaldi with the latest MKL version, 11.3 (released on August 25th), which includes TBB (Threading Building Blocks) support. Previously I had successfully compiled against the previous MKL version with OpenMP, but with the threaded-math flag disabled. The new MKL tools no longer include libiomp5, which gave me some trouble until I found the libmkl_tbb_thread replacement. Anyway, so far it has been working great. I just wanted to ask whether anyone has gotten the same warning I got while compiling, and whether it is something I should be concerned about:

g++ -m64 -msse -msse2 -pthread -Wall -I.. -DKALDI_DOUBLEPRECISION=0 -DHAVE_POSIX_MEMALIGN -Wno-sign-compare -Wno-unused-local-typedefs -Winit-self -DHAVE_EXECINFO_H=1 -rdynamic -DHAVE_CXXABI_H -DHAVE_MKL -I/opt/intel/mkl/include -I/opt/kaldi/tools/openfst/include -Wno-sign-compare -g  -DHAVE_CUDA -I/usr/local/cuda-7.0/include  -DHAVE_SPEEX -I/opt/kaldi/src/../tools/speex/include   -c -o sgmm-calc-distances.o sgmm-calc-distances.cc
In file included from nnet3-latgen-faster.cc:28:0:
../nnet3/nnet-am-decodable-simple.h:83:55: warning: converting to non-pointer type 'kaldi::int32 {aka int}' from NULL [-Wconversion-null]
                         int32 online_ivector_period = NULL);

Furthermore, if someone has come across any issues while using the MKL library with Kaldi, I would be very glad to hear about them.

Cheers,
Angel

Jan Trmal

Aug 27, 2015, 9:31:32 AM
to kaldi-help
Ad the warning -- it was already fixed a couple of days back, so you can just update Kaldi and recompile.

Ad compilation -- as a general rule, you can use the Intel MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) to figure out the correct combination of switches. Thanks for the heads-up about the change; I'll try to modify the makefile to take it into account. The main obstacle with MKL is correct detection of the version, because the linking command line changes a lot from version to version.

Ad using MKL -- I'm not aware of any particular issue with MKL. In my experience (not only in Kaldi), its performance is much more stable (i.e. it's harder to find instances of task sizes or dimensions where the performance would be lower than expected). Be warned though -- Intel won't optimize for AMD CPUs. They are completely upfront about it, and I've been told that the difference is visible. So if you are using AMD CPUs, you might be better off using OpenBLAS (or ATLAS).

y.



Ange Castro

Aug 27, 2015, 11:04:30 AM
to kaldi-help
Hi Yenda,

Great, I will update ASAP.

I was aware MKL was only optimised for Intel processors, luckily the cluster I work on has only Intel processors.

As you mentioned, I changed the configure file using that Link Line Advisor. One slightly odd thing: when I used the link line generated for the GNU option, which outputs flags such as -Wl,--no-as-needed, I ran into a bunch of errors, so I switched to the Intel(R) C/C++ option. The only remaining errors were that some -lmkl libraries were not found, so I kept the flag "-Wl,-rpath=$mkllibdir" from the original configure file, and voila, it worked. I also added the options || check_library $OMPLIBDIR "libmkl_tbb_thread" "a" || check_library $OMPLIBDIR "libmkl_tbb_thread" "so", the same as in linux_configure_omplibdir. That did the trick for me.

I called the script thus:
./configure --mkl-root=/opt/intel/mkl --threaded-math=yes --use-cuda=yes --cudatk-dir=/usr/local/cuda-7.0 --omp-libdir=/opt/intel/mkl/lib/intel64

The --omp-libdir option might seem redundant, but otherwise it defaults to the previous composer_xe edition, which still uses libiomp5.

So far I have seen considerable speedups: around 30-40% in nnet training and ~60% in decoding compared to ATLAS, but only 10-15% and 20% respectively against the previous build, which used OpenMP instead of TBB.

Cheers,
Angel

Jan Trmal

Aug 27, 2015, 11:28:01 AM
to kaldi-help
Would you care to submit a PR request with the modified configure script?
y.

Ange Castro

Aug 27, 2015, 2:21:08 PM
to kaldi-help
A PR request? I am sorry, I don't know what you mean by that.

Jan Trmal

Aug 27, 2015, 2:23:55 PM
to kaldi-help
That means pull-request. 
Thanks for the configure script you've sent me directly -- I will use that.
y.

Ange Castro

Feb 12, 2016, 8:08:56 AM
to kaldi-help
Hi Yenda,

Just a quick update: the new MKL version no longer installs by default the TBB libraries needed for threaded math with TBB. Additionally, when configuring, the libtbb libraries are now found only in the tbb folder, not in mkl (e.g. /opt/intel/tbb/lib/intel/gcc4.x), so the flag --omp-libdir needs to be used.

Using the previous version I didn't notice a substantial performance increase. I will test this version; otherwise, I might use iomp instead.

Cheers,
Angel