Yes, the Tegra 2 has a Cortex-A9 with VFPv3-D16 (a fast hardware floating-point unit), but it does not have NEON (an even faster SIMD unit for integer and floating-point operations). So if you enable VFP, it should be much faster at floating-point calculations than Cortex-A8 based devices (e.g. the iPhone 4 or BeagleBoard-xM). These are things that mostly affect your C/C++ compiler, so make sure you use the correct VFP settings for your compiler (e.g. for GCC it is "-mfpu=vfpv3-d16", and perhaps more, like Tom said).
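For reference, a hypothetical GCC invocation for this CPU/FPU combination might look like the sketch below. The cross-compiler prefix, the -mfloat-abi value, and the file names are assumptions that depend on your toolchain, so adjust them for your setup:

```shell
# Sketch: compiling for a Cortex-A9 with VFPv3-D16 (no NEON), e.g. Tegra 2.
# The "arm-linux-gnueabi-" prefix and "softfp" ABI are assumptions --
# substitute whatever your cross-toolchain actually uses.
arm-linux-gnueabi-gcc -O2 \
    -mcpu=cortex-a9 \
    -mfpu=vfpv3-d16 \
    -mfloat-abi=softfp \
    -o myapp myapp.c
```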
The Tegra optimizations are not enabled by default even on Android, and they are mostly NEON optimizations (plus some Tegra GPU and Tegra multi-core optimizations), so on Tegra 2 you probably won't gain much from them, compared to Tegra 3 devices, which do have NEON.
-Shervin.
On Monday, June 4, 2012 4:52:14 PM UTC-7, tomwhipple wrote:
Hi,
I know you're not building for Android, but as an example, if you look at the android.toolchain.cmake, the "armeabi-v7a" target runs on Tegra2.
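To make the Android case concrete, a configure step using that bundled toolchain file might look like the sketch below. The paths are placeholders and the variable names follow the toolchain file's own conventions, so check android.toolchain.cmake itself for the ABI strings it accepts:

```shell
# Hypothetical configure step using OpenCV's bundled Android toolchain file.
# Paths are placeholders. "armeabi-v7a" implies VFPv3-D16; the toolchain
# also offers an "armeabi-v7a with NEON" ABI for NEON-capable chips.
cmake -DCMAKE_TOOLCHAIN_FILE=../android/android.toolchain.cmake \
      -DANDROID_ABI="armeabi-v7a" \
      ..
```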
Hope that helps!
-tom
On Sunday, June 3, 2012 8:38:16 AM UTC-7, Nicu Stiurca wrote:
Hi,
I recently posted a question on the main OpenCV Yahoo Users Group. Someone replied saying I might have better luck with my question here. I'm not sure this is the right place for it, but it's worth a try so I am pasting my original question below.
I am wondering about Tegra support for non-Android embedded applications. I am developing some CV applications on a Tegra 2 board running Angstrom Linux. (Angstrom is a fairly popular embedded Linux distribution.) I noticed in the OpenCV 2.4.0 beta changelog that the OpenCV Tegra team has apparently achieved significant performance optimizations. Does anybody know whether these optimizations are enabled by default in the 2.4 tag of the OpenCV repository? Are they Android-specific, or can I take advantage of them simply by cross-compiling the source for my platform? Do I have to enable any special flags?