Why does Android Studio provide different values for sizeof(long) at runtime compared to compile time or during preprocessing?

572 views

Raj

Oct 26, 2018, 11:51:28 AM10/26/18
to android-ndk
I'm using Android Studio (64-bit) on Windows 7 (64-bit) and building a native library with the Android NDK (via CMake). Below is the smallest reproduction of my problem.

I'm using the value of sizeof(long) as the size of one of my arrays in the native C code. Android Studio behaves differently at runtime than at compile time: I print the value of sizeof(long) in a log statement and it prints 4. Now, for one of my requirements, I need to declare an array as

char c[4 - sizeof(long)];

I know this looks weird, but it is a reduced version of my bigger problem. We are actually using the sizeof of some structures to size some arrays, and those structures contain long members, which is what causes the incorrect sizes.

The above declaration throws an error saying 'Array length can't be negative'. When I change the value to 8, i.e., when I declare the array as

char c[8 - sizeof(long)];

the error goes away. So sizeof(long) is being evaluated as 8 by Android Studio, which reports an error for values less than 8; if I ignore that error and go ahead with compilation, the compiler reports the same error. But the value of sizeof(long) in the debug print statement below is 4.

__android_log_print(ANDROID_LOG_VERBOSE, "log", "Sizeof(long): %d", sizeof(long));  // This prints Sizeof(long): 4

I have set the macro to build 32-bit native libraries in my CMakeLists.txt file, using set(TARGET_PREFER_32_BIT 1).

Can anyone please help me out with this issue? Is there anything additional required in the Android Studio project setup to get sizeof(long) to be 4 at the preprocessor level? Is there anything else to be done to build 32-bit native libraries? Any help will be much appreciated.

Dan Albert

Oct 26, 2018, 12:18:41 PM10/26/18
to andro...@googlegroups.com
I'm fairly certain `TARGET_PREFER_32_BIT` does nothing in this environment. Android Studio is invoking CMake with an explicit target. You need to use abiFilters if you want to restrict to 32-bit ABIs (but you shouldn't, 64-bit capable apps will be required soon, so you might as well get it out of the way).
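For reference, restricting ABIs happens in the module's build.gradle rather than in CMakeLists.txt. A minimal sketch, assuming a standard externalNativeBuild setup (the exact ABI list here is illustrative):

```groovy
android {
    defaultConfig {
        ndk {
            // Build and package only 32-bit ABIs.
            // Note: shipping 64-bit as well is strongly recommended,
            // since 64-bit support is becoming mandatory.
            abiFilters 'armeabi-v7a', 'x86'
        }
    }
}
```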

--
You received this message because you are subscribed to the Google Groups "android-ndk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to android-ndk...@googlegroups.com.
To post to this group, send email to andro...@googlegroups.com.
Visit this group at https://groups.google.com/group/android-ndk.
To view this discussion on the web visit https://groups.google.com/d/msgid/android-ndk/24d5ab34-25f8-44c4-8fa2-f72f0adc726c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Andrew Esh

Oct 29, 2018, 3:11:21 AM10/29/18
to android-ndk
There appear to be a couple of misconceptions in your way here.

First, you start out by telling us the bit width of Android Studio and Windows. These do not matter, because Android Studio sizes everything for the target environment, not the build environment. Android is always cross-compiled, so it may help to study how cross-compiling works.

Second, array sizes are not normally calculated the way you are doing it. The number in the brackets of an array declaration is the count of objects you want; the size of each object is what sizeof applies to. You're using char arrays, so the size of each object happens to be one. Instead, your array declaration should use the structure or other type you are making an array of, and the number of them you want goes in the brackets. Here is an example:

struct foo { int bar; long baz; };
struct foo myarray[10];

This declares a structure foo, which has an int and a long, then declares an array myarray of ten foo structures.

The reason you want to do this instead of using char arrays and calculated sizeofs is that the array references then work as expected in the code. Otherwise, there is confusion: using a char array x, x[2] selects the third byte of the array. Which item in your structure contains that third byte? How would you refer to a long somewhere in the structure? You would have to write a bunch of code to figure out where everything is in the array based on the sizes of the various structure components.

The good news is that C already does all of the size calculation for you. Keep the size information in the type part of the array declaration on the left, and the count in the brackets on the right. Then the references not only give you what you want, but compiling the code for platforms with different type sizes gives the correct results. Here is how my array is referenced:

int x = myarray[3].bar;
long y = myarray[7].baz;
struct foo z = myarray[5];

struct foo w;
w.bar = x;
w.baz = y;

This reads the int from the fourth structure in my array and the long from the eighth. Then a struct foo variable z is declared and set to the sixth element of the array. Finally, another foo variable, w, is assigned the values taken from the array in the first two steps.

All of this is done without using sizeof and without counting bytes within the structure or the array. The compiler does that for you, and it does it differently for each platform with different type sizes. That latter feature is what makes C code portable between platforms, and it is an important concept to understand.

If there are other requirements that are causing you to want a char array, perhaps look into type casting, or the use of a union.


kacper.k...@vestiacom.com

Oct 29, 2018, 10:35:27 AM10/29/18
to android-ndk
Taking all other answers into account (they explain a lot), I think your main confusion is caused by the fact that Android Studio is compiling for all architectures, 32-bit and 64-bit (even on 64-bit Windows). For 32-bit architectures long is 4 bytes; for 64-bit it is 8 bytes (at least on x86-64). So your code was not valid for ALL architectures.

When you run the application, you only see the effect of the code compiled for the 32-bit architecture.

You can solve this by using uint8_t, uint16_t, uint32_t, and uint64_t from <stdint.h>. Treat it as a LAST resort among the options mentioned here.

John Dallman

Oct 29, 2018, 10:43:36 AM10/29/18
to andro...@googlegroups.com
On Mon, Oct 29, 2018 at 2:35 PM <kacper.k...@vestiacom.com> wrote:
For 32bit architectures long is 4bytes long, for 64bit it's 8bytes long (at least on x86-64).

ARM 64-bit is the same. 

John

Raj Shekar

Oct 29, 2018, 1:50:09 PM10/29/18
to andro...@googlegroups.com
Thank you everyone for your kind responses and valuable details. I was facing the issue because my abiFilters were not set to any specific value. As you have stated, compilation was being done for both 32-bit and 64-bit; my native code compiled correctly for 32-bit but threw the error I mentioned when compiling for 64-bit.

I have set the abiFilters to compile for 32-bit only, and that solved the problem. Thanks all for your valuable time.

Thanks, 
Raj. 


Raj Shekar

Oct 29, 2018, 2:20:01 PM10/29/18
to andro...@googlegroups.com
Sorry, I was under the impression that the way to build only 32-bit native libraries was the set() command in CMakeLists.txt, i.e. set(TARGET_PREFER_32_BIT 1). This is wrong and doesn't help.

Thanks,
Raj. 