How to compute 2-norm distance between two 3-d tables


Tien Nguyen

unread,
May 18, 2016, 4:02:19 AM5/18/16
to torch7
Hi torchers, 

I am trying to compute the 2-norm distance between two 3-d tables, for example 64 x 300 x 1200, along the first dimension. My expected output should be 1 x 300 x 1200.

However, I cannot use nn.PairwiseDistance or torch.dist because they require vector or matrix inputs. Also, if I do a nested loop along the second and third dimensions, the computation time will be high.

Do you have any ideas for this problem? Thank you.

Vislab

unread,
May 18, 2016, 6:00:39 AM5/18/16
to torch7
Do you want to compute the L2-norm of two 3D tensors, or of two tables with N entries of 3D tensors? I'm confused.

Tien Nguyen

unread,
May 18, 2016, 7:11:37 AM5/18/16
to torch7
@Vislab yes, two 3D tensors.
I just recently learned to code lua, I am thinking about make a loop along the first dimension, subtracting the two matrix of 300 x 1200, squaring them. Then loop again to sum them up and make a square root. Therefore, I can use some CUDA functions for torch.




Vislab

unread,
May 18, 2016, 10:24:14 AM5/18/16
to torch7
For two 3D tensors I think this should suffice:
res = torch.sqrt((tensorA - tensorB):pow(2):sum())

Tien Nguyen

unread,
May 20, 2016, 7:47:11 AM5/20/16
to torch7
Thank you @Vislab


Tushar N

unread,
May 20, 2016, 9:16:39 AM5/20/16
to torch7
Vislab's solution produces a single number (the L2 norm of the flattened tensor).
If you want it along a specific dimension, use torch.norm:

torch.norm(tensorA - tensorB, 2, 1) -- L2 norm over dimension 1

This will take two 64 x 300 x 1200 tensors and output a 1 x 300 x 1200 tensor.
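For example, a quick shape check (random tensors used purely for illustration):

tensorA = torch.randn(64, 300, 1200)
tensorB = torch.randn(64, 300, 1200)
res = torch.norm(tensorA - tensorB, 2, 1)
print(res:size()) -- 1 x 300 x 1200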


Tien Nguyen

unread,
May 20, 2016, 10:37:46 PM5/20/16
to torch7
@Tushar N Thank you for your suggestion. I changed the code to compute the sum along the first dimension and it works. I will look at torch.norm.

res = torch.sqrt((tensorA - tensorB):pow(2):sum(1))

Tien Nguyen

unread,
May 20, 2016, 10:49:50 PM5/20/16
to torch7
Your code runs faster, thank you!

Rui Shu

unread,
May 21, 2016, 12:26:07 AM5/21/16
to torch7
If you're doing this iteratively, it might be valuable to consider the cost of memory allocation for each operation (see the first paragraph of https://github.com/torch/torch7/blob/master/doc/maths.md).
I'd consider the following:

tensorA = torch.randn(64, 300, 1200)
tensorB = torch.randn(64, 300, 1200)

-- allocate memory beforehand
buffer = torch.Tensor(64, 300, 1200)
answer = torch.Tensor(1, 300, 1200)

-- perform computation (however many times you need)
buffer:csub(tensorA, tensorB):pow(2) -- buffer = (tensorA - tensorB), squared element-wise
answer:sum(buffer, 1):sqrt()         -- sum over dimension 1, then in-place square root

This is especially true if you switch over to GPU computation; doing memory allocation on the fly on the GPU can cause noticeable overhead (see https://github.com/torch/cunn).
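For instance, a rough (untested) sketch of the same pattern on the GPU, assuming cutorch is installed:

require 'cutorch'

-- move the inputs to the GPU once
tensorA = tensorA:cuda()
tensorB = tensorB:cuda()

-- allocate GPU memory beforehand, as above
buffer = torch.CudaTensor(64, 300, 1200)
answer = torch.CudaTensor(1, 300, 1200)

buffer:csub(tensorA, tensorB):pow(2)
answer:sum(buffer, 1):sqrt()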

Tien Nguyen

unread,
May 21, 2016, 2:17:14 AM5/21/16
to torch7
You're right, my program executes iteratively. I pre-allocated the memory and it runs 20 times faster. Thank you!

Rui Shu

unread,
May 21, 2016, 2:20:44 AM5/21/16
to torch7
Pre-allocation for the win! Just make sure that, if you're saving all the results, you copy them over to another tensor; otherwise you're simply overwriting the same memory buffer each iteration.
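Roughly like this (numIters and results are just illustrative names):

results = torch.Tensor(numIters, 1, 300, 1200) -- one slot per iteration
for t = 1, numIters do
   -- ... fill tensorA and tensorB for this iteration ...
   buffer:csub(tensorA, tensorB):pow(2)
   answer:sum(buffer, 1):sqrt()
   results[t]:copy(answer) -- copy out, so buffer and answer can be reused
end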