views into tables


Daniel Oberhoff

Nov 28, 2014, 8:18:46 PM
to nt2...@googlegroups.com
Hello again,

Well, here is my next:

Is it possible with nt2 to have "views" into tables? In my case I will have a 3D table, and I want to scan it the way a convolution does, but with a step size possibly different from one, and stepping only in two of the dimensions. I need this both for input and output: for input I might get away with copying, but for output, when assigning to the view, I want to write into the given part of the large table. Is that possible, and if so, how?
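
A minimal sketch of the kind of strided, windowed access described above, assuming nt2's Matlab-style _(start, step, stop) ranges and default 1-based indexing; the names win and step, the sizes, and the exact headers are illustrative rather than taken from the thread:

// Hedged sketch: strided, windowed reads and writes on a 3D nt2::table.
#include <cstddef>
#include <nt2/table.hpp>
#include <nt2/include/functions/of_size.hpp>

int main()
{
  using nt2::_;
  nt2::table<float> in ( nt2::of_size(64, 64, 8) );
  nt2::table<float> out( nt2::of_size(64, 64, 8) );

  std::size_t const win = 3, step = 2;

  // Step through the first two dimensions only; the third is taken whole.
  for(std::size_t j = 1; j + win - 1 <= 64; j += step)
    for(std::size_t i = 1; i + win - 1 <= 64; i += step)
    {
      // Reading a window through a view of the input...
      auto patch = in(_(i, i + win - 1), _(j, j + win - 1), _);
      // ...and assigning through a view writes into that region of out.
      out(_(i, i + win - 1), _(j, j + win - 1), _) = 2.f * patch;
    }
}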

Best

Daniel

Daniel Oberhoff

Nov 29, 2014, 7:57:04 AM
to nt2...@googlegroups.com
OK, what seems to work is something like this:

auto z(x(_(1,1,3),_(1,1,3)));

Then when I assign to z it changes the corresponding region of x, as I had hoped. I just can't seem to find the right type to give z, so I have to make all functions on z templates.
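
As a hedged sketch of that workaround (not from the thread; scale_region is an illustrative name), a function operating on such a view can simply be templated on the expression type:

// The subview's exact type is an expression template, so generic code
// takes it as a template parameter rather than naming the type.
template<class View>
void scale_region(View&& v, float s)
{
  v = s * v;  // assigning through the view writes back into the parent table
}

// usage, assuming x is an nt2::table<float>:
//   auto z = x(nt2::_(1,1,3), nt2::_(1,1,3));
//   scale_region(z, 2.f);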

Also, can I change the base index? It seems the default is 1-based indexing...

Joel Falcou

Nov 29, 2014, 8:27:34 AM
to nt2...@googlegroups.com
z is a complex type which is actually an expression template representing the subview. There is ongoing effort to provide a simpler view<table<T>> type for those.
Meanwhile, auto + templates is the correct way to do it.

As for indexing,

table<float, C_index>

gives you C-style indexing. It's somewhat tested, but if you find a spot where it fails, ping us.
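
A hedged sketch of the difference (the exact spelling of the setting may vary between nt2 versions, e.g. nt2::C_index_ with a trailing underscore):

#include <nt2/table.hpp>

// Default table: Matlab-style, 1-based indexing.
nt2::table<float>                a( nt2::of_size(4, 4) );  // first element is a(1,1)

// C-style, 0-based indexing via the C_index setting mentioned above.
nt2::table<float, nt2::C_index_> b( nt2::of_size(4, 4) );  // first element is b(0,0)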




Daniel Oberhoff

Nov 29, 2014, 1:02:32 PM
to nt2...@googlegroups.com
Hello,

Yes, thanks for the answers. The only thing I am worried about is that expressions over these views, being strided, will not benefit from fast evaluation via SIMD, even though they are contiguous in the inner dimension. Is that right? And if so, is there a way to work around this?

Sent from my iPhone

Joel Falcou

Nov 29, 2014, 1:12:20 PM
to nt2...@googlegroups.com
IIRC we optimize cases like a(_,n), where _ alone means all elements of that dimension. I *think* we also SIMD over _(1,n), which is equivalent to _(1,1,n).

Mathias Gaunard

Nov 29, 2014, 1:13:52 PM
to nt2...@googlegroups.com
On 29/11/2014 19:02, Daniel Oberhoff wrote:
> Hello,
>
> Yes, thanks for the answers. The only thing I am worried about is that
> expressions over these views, being strided, will not benefit from fast
> evaluation via SIMD, even though they are contiguous in the inner
> dimension. Is that right? And if so, is there a way to work around this?

If you do z + z or something like that, the + will still be done using SIMD.
It's just the loads from memory which won't be properly vectorized.

Mathias Gaunard

Nov 29, 2014, 1:26:36 PM
to nt2...@googlegroups.com
On 29/11/2014 19:12, Joel Falcou wrote:
> IIRC we optimize cases like a(_,n), where _ alone means all elements of
> that dimension. I *think* we also SIMD over _(1,n), which is equivalent
> to _(1,1,n).

We optimize the case where it can statically be deduced that the data
being extracted is fully contiguous.

Basically, it means using '_' on all leading arguments, with the
possibility of using _(a, b) once, followed exclusively by scalars.
So x(_, _(a, b), c) is optimized, but x(_(a, b), _, c) isn't.

We do not treat _(1,1,n) and _(1,n) as being equivalent since there is
no static way to tell that they're the same.

There are plans however to introduce better support for strides.
The idea would be to be able to fully optimize things such as
x(_(a0,c0), _(a1,b1,c1), ...)
It would however only get vectorized on the innermost non-singleton
dimension.
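
A hedged illustration of these rules (x is assumed to be a 3D nt2::table<float>; a, b, c, n are runtime values; which views actually vectorize should be checked against the nt2 version in use):

#include <cstddef>
#include <nt2/table.hpp>

template<class Table>
void contiguity_examples(Table& x, std::size_t a, std::size_t b,
                         std::size_t c, std::size_t n)
{
  using nt2::_;
  auto v1 = x(_, _, c);          // fully contiguous: loads can be vectorized
  auto v2 = x(_, _(a, b), c);    // leading _'s, one _(a, b), then scalars: contiguous
  auto v3 = x(_(a, b), _, c);    // _(a, b) before a _: not statically contiguous
  auto v4 = x(_(1, 1, n), _, c); // _(1,1,n) is not treated like _, so no contiguity is deduced
}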

Daniel Oberhoff

Nov 29, 2014, 5:30:22 PM
to nt2...@googlegroups.com
So you are saying that if my expression has _(a,b) on the leading dimension, I still get SIMD optimization? That'd be great and enough for my case :)

Sent from my iPhone

Mathias Gaunard

Dec 1, 2014, 4:22:00 AM
to nt2...@googlegroups.com
It's the opposite: you can only have it on the outer dimension ATM.

Daniel Oberhoff

Dec 1, 2014, 4:23:43 AM
to nt2...@googlegroups.com
Which one is contiguous? You use Fortran order, i.e. the first one is? But using _(a, b) there keeps it contiguous, right? I mean the stride is hard-coded 1…


Daniel Oberhoff
daniel....@gmail.com

Mathias Gaunard

Dec 1, 2014, 4:56:43 AM
to nt2...@googlegroups.com
x(_(a, b)) is contiguous
x(_(a, b), _(c, d)) is not

On 01/12/14 10:23, Daniel Oberhoff wrote:
> Which one is contiguous? You use Fortran order, i.e. the first one is? But using _(a, b) there keeps it contiguous, right? I mean the stride is hard-coded 1…
>
>

Daniel Oberhoff

Dec 1, 2014, 4:57:43 AM
to nt2...@googlegroups.com
Ah, you mean completely contiguous. Well, x(_(a, b), _(c, d)) is still contiguous in one of the dimensions, which, if it is large enough, is enough for direct SIMD loads…

---
Daniel Oberhoff
daniel....@gmail.com