On Thu, Feb 07, 2013 at 02:36:25PM -0500, Oliver Ruebenacker wrote:
> Why is this faster? This needs to look at, and make a decision
> about, every element, unlike the other solutions.
So, without studying the code in detail, here is my guess (someone else
may follow up with a more detailed analysis):
Every time you want to add or remove elements of a Vector, a new Vector
has to be created. In the case where you're appending (or popping off
the end) this can be pretty fast, because only the path from the root
of the tree to the affected leaf needs copying: a 32-element array is
copied 1-6 times and about 6 entries are changed.
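As a rough sketch of the persistence property this relies on (nothing here beyond the standard library; the sizes are just illustrative):

```scala
// Appending to an immutable Vector copies only the root-to-leaf path
// (arrays of width 32), sharing the rest of the tree with the original.
object AppendSketch {
  def main(args: Array[String]): Unit = {
    val v = Vector.tabulate(100000)(identity)
    val w = v :+ 100000          // new Vector; shares almost all structure with v
    assert(v.length == 100000)   // original is unchanged (persistent)
    assert(w.length == 100001)
    assert(w(100000) == 100000)
    println("append ok")
  }
}
```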
However, in cases where you want to remove from the beginning or middle
of a Vector, or when you want to concatenate vectors, you end up having
to "shift" significant portions of the inner array(s) which is more
expensive.
So, for instance, if you call drop(1) on a vector, it will need to run
through the entire vector, shifting everything over by 1 through the
nested tree of arrays. This isn't necessarily any faster than a map or
filter call. Similarly, when you concatenate vectors, it will have to
copy the left-hand vector (relatively fast) and then shift the
right-hand vector's elements one by one into the copy. It might use an
iterator or some other method, but it will be relatively expensive.
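To make the drop(1)-versus-filter comparison concrete: both have to visit (and copy) every surviving element, so neither is asymptotically cheaper than the other. A small sketch:

```scala
// drop(1) shifts every remaining element through the tree of arrays;
// an equivalent filter copies every kept element into a fresh vector.
// Both are O(n) over the surviving elements.
object DropVsFilter {
  def main(args: Array[String]): Unit = {
    val v = Vector.tabulate(10)(identity)   // Vector(0, 1, ..., 9)
    val dropped  = v.drop(1)                // shift-based removal of the head
    val filtered = v.filter(_ != 0)         // element-by-element copy
    assert(dropped == filtered)             // same result, comparable cost
    println("drop vs filter ok")
  }
}
```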
By contrast, a filter will just allocate a new empty (and mutable)
vector and iterate through the existing vector, copying (or not) each
value. You could certainly write a delete method that would be faster
than filter, but it seems reasonable to me that concatenating vectors
is unlikely to be faster than a filter or map.
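A hand-rolled delete along those lines might use a mutable builder, much as filter does internally. `deleteFirst` below is a hypothetical helper, not a standard-library method:

```scala
import scala.collection.immutable.VectorBuilder

object DeleteSketch {
  // Remove the first occurrence of x in one left-to-right pass,
  // accumulating into a mutable builder and freezing it at the end.
  def deleteFirst[A](v: Vector[A], x: A): Vector[A] = {
    val b = new VectorBuilder[A]
    var skipped = false
    for (e <- v) {
      if (!skipped && e == x) skipped = true  // drop exactly one occurrence
      else b += e
    }
    b.result()
  }

  def main(args: Array[String]): Unit = {
    val v = Vector(1, 2, 3, 2)
    assert(deleteFirst(v, 2) == Vector(1, 3, 2))
    assert(deleteFirst(v, 9) == v)  // absent element: vector unchanged
    println("delete ok")
  }
}
```

Unlike filter, this can stop testing elements once the match is found, though it still copies everything after the deletion point.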
-- Erik