Jorgen Grahn wrote:
> On Tue, 2020-05-26, Pavel wrote:
> ...
>> Also, more library and/or hardware support is often available for
>> parallel arrays of basic types than is for a single array of
>> user-defined compound objects (e.g. think vector operations).
>
> Surely it makes no difference if you copy 100 int or 50 std::pair<int, int>?
> I'd expect a quality compiler to optimize them the same way.
>
> (Of course it's different if things like std::strings are involved.)
>
> /Jorgen
>
Sorry, I've been off the group for a while. The point was not about unrolling a
2-dimensional array into a 1-dimensional one (that is also often beneficial in
some applications such as matrix ops, but it's beside the point). Parallel
arrays vs. pairs won't make a difference for a copy, of course. The point was
this: suppose you have those 50 std::pair<int, int> -- which are logically 2-D
vectors (x, y) -- and you want to, e.g., compute an array of their 50 lengths
using Intel vector instructions (some SSE variant). You are better off having
them as parallel arrays of 50 xs and 50 ys, which you can load into XMM
registers and index with a single index register, say EDI, than as a single
array of 50 pairs (for both load/store streaming and indexing convenience). And
if you only need the xs or only the ys for some operation (say, translating
your vectors only up/down or only left/right), the advantage is even more
obvious.
-Pavel