Why are array dimensions and indices Int and not Uint?

Austin Lund

May 28, 2014, 9:45:35 PM
to julia...@googlegroups.com
There is code for checking whether array dimensions and indices are negative.  See, for example, arrayref and arraysize in the C code in src/array.c.

Surely these values can never be negative, and much of this code could be avoided if the dimensions and indices of an array were known to only ever be positive, i.e. if they were defined as Uint.  In fact, the native size_t is unsigned.

The overhead of this checking is low, so this is clearly a very minor issue.
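
For concreteness, here is a minimal sketch of the check in question (a hypothetical REPL session added for illustration; the exact error text varies by Julia version):

A = [10, 20, 30]

# Both failures are caught by the runtime bounds check rather than
# being ruled out by the type of the index:
A[-1]   # throws BoundsError
A[4]    # throws BoundsError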

Stefan Karpinski

May 29, 2014, 2:55:01 AM
to Julia Users
> Surely these values can never be negative

Well, they shouldn't be negative, but who among us hasn't made a programming error that computed the wrong value for an index, whether a negative one or one that's too large?

You'll note that C uses size_t for indexing, but it also converts values between integer types automatically, which is convenient when you haven't made any mistakes and dangerous when you have. Consider this program:

#include <stdio.h>

/* The long argument is implicitly converted to size_t here: -1 becomes
   2^64 - 1, the unsigned reading of the same two's-complement bits. */
void printit(size_t n) {
    printf("n = %zu\n", n);
}

int main(void) {
    long n = -1;
    printf("n = %ld\n", n); /* prints the signed value */
    printit(n);             /* silently converted to size_t */
    return 0;
}

It prints this on a 64-bit system:

n = -1
n = 18446744073709551615
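
That huge number is 2^64 - 1: the same 64-bit pattern as -1, just read as unsigned. The same reinterpretation is easy to see from Julia itself (an illustrative aside using the modern UInt spelling; at the time of this thread the type was written Uint):

reinterpret(UInt64, Int64(-1))   # 0xffffffffffffffff == 18446744073709551615
typemax(UInt64)                  # the same value, 2^64 - 1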

In Julia, we tend to use Int everywhere because it's often annoying to work with unsigned integers, and on 64-bit systems at least, Int is plenty big for all indexing needs despite being signed. As a result, you do need to check for negative indices.
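
A minimal sketch of that annoyance (again using the modern UInt spelling; Julia's integer arithmetic wraps rather than erroring):

# With unsigned indices, an off-by-one like i - 1 at i == 0 wraps to the
# largest UInt instead of producing an obviously-wrong negative value.
i = UInt(0)
i - 1        # wraps to 0xffffffffffffffff
Int(0) - 1   # -1, which a bounds check can catch and report cleanly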