Looking at the code, if you're building for the correct architecture,
gcc will define __SSE2__ for you, and that will make sure SSE2 support
is built in. There also appears to be run-time detection of CPU
features to decide whether it's actually used.
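For context, the pattern usually looks something like the sketch below.
This is purely illustrative, not WebRTC's actual code: ProcessBlock is a
made-up name, and I'm using gcc's __builtin_cpu_supports for the
run-time check where WebRTC has its own CPU-feature wrapper.

#if defined(__SSE2__)
#include <emmintrin.h>  // SSE2 intrinsics; only pulled in when gcc defines __SSE2__
#endif

// Compile-time: the SSE2 path exists only if __SSE2__ is defined.
// Run-time: a CPU-feature check decides whether that path is actually taken.
void ProcessBlock(float* data, int len) {
#if defined(__SSE2__)
  if (__builtin_cpu_supports("sse2")) {
    // ... SSE2 implementation using _mm_* intrinsics would go here ...
    return;
  }
#endif
  // ... plain C/C++ fallback ...
  (void)data;
  (void)len;
}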
Cheers,
--
Arun Raghavan
http://arunraghavan.net/
(Ford_Prefect | Gentoo) & (arunsr | GNOME)
The gyp flag disable_sse2 only has an effect on Linux, and then only
when targeting a 32-bit architecture (since x86-64 processors always
have SSE2 support). The target architecture will match your host
architecture by default. If you're building on a 64-bit machine, you
can use the following to make disable_sse2 do something:
$ ./build/gyp_chromium --depth=. -Dtarget_arch=ia32 -Ddisable_sse2=1 webrtc.gyp
> But still the CPU usage is the same as before.
> Can you please help me confirm whether SSE2 is actually disabled or not?
If you want to quickly ensure SSE2 is disabled, comment out "#define
WEBRTC_USE_SSE2" in src/typedefs.h and rebuild.
There's a better method in
src/modules/audio_processing/main/test/process_test/process_test.cc
that essentially disables the run-time detection. Look under the
"--noasm" flag.
On the audio side, only the AEC uses SSE optimizations. So to see any
difference, make sure you have AEC enabled in voe_cmd_test. (I see you
mentioned this already, but I just wanted to make it clear for others.)
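If you end up driving VoiceEngine from your own code rather than
voe_cmd_test, enabling the AEC looks roughly like this. I'm writing
this from memory of the VoE API, so treat it as a sketch and check
voe_audio_processing.h in your checkout for the exact signatures:

#include "voe_base.h"
#include "voe_audio_processing.h"

// Sketch: obtain the audio-processing sub-API and switch on the full AEC.
int EnableAec(webrtc::VoiceEngine* voe) {
  webrtc::VoEAudioProcessing* apm = webrtc::VoEAudioProcessing::GetInterface(voe);
  if (!apm)
    return -1;
  int ret = apm->SetEcStatus(true, webrtc::kEcAec);  // enable echo control, AEC mode
  apm->Release();
  return ret;
}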