I think you can only really rely on IDs having different values.
In general, with Twitter at the moment, you could assume they increase over time, but (and I don't work for Twitter) ID allocation on large multi-host systems typically doesn't work by handing out strictly sequential IDs without gaps - it's too hard to coordinate and not really necessary.
So, for example, one approach is to give each ID-assigning host a small block of IDs it can draw from, so it can allocate a run of IDs knowing they're unique without taking any kind of global lock (it only takes the lock to ask for a new block every now and then). Another approach is to keep clocks synchronised to some known accuracy and compute IDs as "period-since-epoch * some-suitable-multiplier + unique-offset-per-host + incrementing-counter-for-this-host" (a rough sketch of that second scheme is below).
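To make that second scheme concrete, here's a minimal Python sketch. Everything in it - the epoch, the bit widths, the class and field names - is invented for illustration; it shows one way to compose a timestamp, a per-host offset, and a per-host counter into a unique ID, not Twitter's actual implementation.

```python
import threading
import time

# Illustrative constants - not Twitter's real format.
EPOCH_MS = 1_288_834_974_657   # arbitrary fixed custom epoch, in milliseconds
HOST_BITS = 10                 # up to 1024 ID-assigning hosts
COUNTER_BITS = 12              # up to 4096 IDs per host per millisecond


class TimeBasedIdGenerator:
    """Unique, roughly time-ordered IDs: timestamp | host offset | counter."""

    def __init__(self, host_id: int):
        assert 0 <= host_id < (1 << HOST_BITS)
        self.host_id = host_id
        self.last_ms = -1
        self.counter = 0
        self.lock = threading.Lock()   # local lock only; no global coordination

    def next_id(self) -> int:
        with self.lock:
            now_ms = int(time.time() * 1000) - EPOCH_MS
            if now_ms == self.last_ms:
                self.counter += 1
                if self.counter >= (1 << COUNTER_BITS):
                    # Counter exhausted for this millisecond; wait for the next one.
                    while now_ms <= self.last_ms:
                        now_ms = int(time.time() * 1000) - EPOCH_MS
                    self.counter = 0
            else:
                self.counter = 0
            self.last_ms = now_ms
            # "period-since-epoch * multiplier + host-offset + counter",
            # done here with bit shifts instead of multiplication.
            return (
                (now_ms << (HOST_BITS + COUNTER_BITS))
                | (self.host_id << COUNTER_BITS)
                | self.counter
            )
```

Note that IDs from a generator like this are unique and increase roughly with time, but consecutive values from different hosts (or different milliseconds) leave large gaps between them - which is exactly the point below.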
I'm sure people can come up with other schemes as quickly as we could type them up, but in general you make your ID space many orders of magnitude bigger than you strictly need, and in return you gain a lot of flexibility in how IDs can be allocated quickly and cheaply in a distributed system. But I wouldn't assume that every possible ID value is necessarily allocated.