Old things can be cute too! I mean cute in the sense of "excessively clever". Regardless of whether the tradition is venerable, I think my argument holds.
Moreover, as is often the case, there's an alignment between the interests of the programmer and the compiler, since both are trying to interpret code. Consider that if we did this, not only would the programmer need to keep in mind that A[:,k] might slice one column out of A or all but one column out of A (R's exclusion rule: a positive k selects a column, a negative k drops one), the compiler would too. When I say that the compiler needs to keep this in mind, I mean that it would have to emit code for every single indexing operation that checks the sign of k and does very different things depending on that sign. That's more overhead on every indexing operation, which is bad, but far worse is that it completely destroys the ability to cleanly infer the type of A[:,k]: whether the result is 1-d or 2-d depends on the *sign* of k, which the type system knows nothing about. And if k is unsigned, it's a much bigger disaster, because there is no sign to check: -0x1 reinterpreted as a 64-bit unsigned integer is 0xffffffffffffffff, an enormous positive index.
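To make that concrete, here's a minimal Python sketch (NumPy standing in for the array type) of the branch a compiler would have to emit at every indexing site. `sign_dependent_index` is a hypothetical function of mine approximating R's exclusion rule, not any real library's API:

```python
import numpy as np

def sign_dependent_index(A, k):
    """Hypothetical R-style rule: a nonnegative k selects column k,
    a negative k drops a column. The result's rank (1-d vs. 2-d) is
    decided by the runtime sign of k, so this branch would have to
    be emitted for every single indexing site."""
    if k >= 0:
        return A[:, k]                     # 1-d: one column
    return np.delete(A, -k - 1, axis=1)    # 2-d: n-1 columns

A = np.arange(12).reshape(3, 4)
print(sign_dependent_index(A, 1).shape)    # (3,)   -- 1-d
print(sign_dependent_index(A, -2).shape)   # (3, 3) -- 2-d

# The unsigned case is worse: reinterpret -0x1 as a 64-bit unsigned
# value and the "negative" index silently becomes a huge positive one.
print(hex(-1 & 0xFFFFFFFFFFFFFFFF))        # 0xffffffffffffffff
```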
This is exactly the kind of feature that prevents languages like R and Python from being successfully type-inferred. That's not a dig at their design; this kind of feature is perfectly reasonable in a language that's meant to be interpreted. But it's exactly the kind of thing that makes it impossible to just "rub some type inference" on existing dynamic languages and make them fast.
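To see where inference gets stuck, here's a sketch in plain Python; `Column` and `Matrix` are stand-in aliases of mine, not real types. The tightest return type any static analysis can soundly assign is the union of two structurally different types, so every caller inherits a runtime shape check:

```python
from typing import Union

Column = list[float]         # stand-in for a 1-d result
Matrix = list[list[float]]   # stand-in for a 2-d result

def index_cols(A: Matrix, k: int) -> Union[Column, Matrix]:
    # Nonnegative k selects one column; negative k drops one column
    # (approximating R's exclusion rule). The sign of k is a runtime
    # value, so the union above can never be narrowed statically.
    if k >= 0:
        return [row[k] for row in A]
    drop = -k - 1
    return [row[:drop] + row[drop + 1:] for row in A]

# Every caller has to dispatch on the runtime shape of the result,
# which is exactly the per-value work an interpreter does and a
# compiler can't optimize away:
result = index_cols([[1.0, 2.0], [3.0, 4.0]], -1)
if result and isinstance(result[0], list):
    print("2-d:", result)    # 2-d: [[2.0], [4.0]]
else:
    print("1-d:", result)
```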