I'm certainly not the expert among us. Nevertheless, I would like to offer a different view on why rounding down might be natural when using a float as an array index.
Assume we are in a language with zero-based indexing, and think of a one-dimensional array as a ruler with natural-number graduations ("0", "1", "2", ...) marking the distance from the edge of the ruler ("0" being right at the edge). An array cell is the space between two consecutive graduations: cell n lies between graduations n and n + 1. Using a real number (in practice, a float) to specify a point on the ruler, we land in some cell. The natural way to convert that real to a natural number (rather, the float to an int, perhaps even a uint) is therefore to round down, because then every point within a cell maps to the same index: the cell pointed to is invariant under the conversion.
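A minimal sketch of that conversion, in Python (the function name and the non-negativity check are my own choices for illustration):

```python
import math

def cell_index(x: float) -> int:
    """Map a point x on the ruler to the zero-based cell containing it.

    Any point in the half-open interval [n, n + 1) lands in cell n,
    so rounding down keeps the cell invariant.
    """
    if x < 0:
        raise ValueError("point lies before the edge of the ruler")
    return math.floor(x)

# Every point within a cell maps to the same index:
# cell_index(2.0), cell_index(2.5), and cell_index(2.999) all give 2.
```

Note that round-to-nearest would not have this property: points in the left and right halves of the same cell would map to different indices.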
The ruler analogy is less clear when it comes to one-based indexing. I suppose you could say rounding up would be the corresponding conversion convention there.
Pondering the issue more, one might even question using floats at all for pointing to an array cell. The precision needed to hit cell n is the same for large n as for small n, so a fixed-precision type might be more appropriate. An analogous issue was discussed regarding Ardour (a very nice, free digital audio workstation): points on the time axis are currently represented by floating-point numbers, even though time coordinates towards the end of a song need just as much precision as those near the beginning. It works in practice, but it's perhaps not the nicest way to do it.