How to normalize the rows of a 2D tensor (i.e. matrix)


Ajay Talati

unread,
Mar 7, 2015, 9:05:53 PM3/7/15
to tor...@googlegroups.com
Hi,

I'm struggling with transferring code from numpy/R/Matlab over to torch.

How do you normalize the rows of a 2D tensor (i.e. matrix), say m, in torch?

Normally I'd do this,

m %*% diag(1/sum(m,2))

which would get the sums of each row to be one.
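Since the question is about porting from numpy, here is a small numpy sketch of the target operation (the matrix is made up for illustration): divide each row by its own sum, so every row sums to one.

```python
import numpy as np

# Hypothetical example matrix; any matrix with nonzero row sums works.
m = np.array([[1.0, 3.0],
              [2.0, 2.0],
              [0.5, 1.5]])

# keepdims=True keeps row_sums with shape (3, 1), so broadcasting
# divides each row by its own sum rather than scaling columns.
row_sums = m.sum(axis=1, keepdims=True)
normalized = m / row_sums
```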




soumith

unread,
Mar 7, 2015, 9:41:55 PM3/7/15
to torch7 on behalf of Ajay Talati
destructively normalize matrix m in-place:

m:add(m:mean(2):mul(-1):expandAs(m))

I can give you other cases as well if you want.
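Note that this snippet subtracts each row's mean in place (it centers every row to mean zero), which is a different normalization from making the row sums equal one. A numpy sketch of what the one-liner computes, on a made-up matrix:

```python
import numpy as np

m = np.array([[1.0, 3.0],
              [2.0, 6.0]])

# Equivalent of m:add(m:mean(2):mul(-1):expandAs(m)):
# subtract each row's mean from that row, in place.
m -= m.mean(axis=1, keepdims=True)
```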


Ajay Talati

unread,
Mar 7, 2015, 11:40:47 PM3/7/15
to tor...@googlegroups.com
OK I did it - but this is the SH*TT*EST piece of coding I've done this year - the consolation is that it works, and it looks like the calculation I'm trying to reproduce :)))))

Pub lunch for anyone who can improve it


  -- generate the (unnormalized) Gaussian kernel
  local gauss = torch.Tensor(N, C):zero()
  for k = 1, N do
    for c = 1, C do
      gauss[k][c] = math.exp(-math.pow(c - mu[k], 2) / (2 * sigma_sqrd))
    end
  end

  -- normalize each row to sum to one -- find a better way of doing this !!!!
  local Z = torch.sum(gauss, 2)

  local one_over_Z = torch.cdiv(torch.ones(N), Z)

  local out = torch.Tensor(N, C):zero()

  for k = 1, N do
    out[k] = torch.mul(gauss[k], one_over_Z[k])
  end

  -- check row sums
  -- torch.sum(out, 2)

  return out
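For comparison, the same kernel-plus-normalization can be done without loops in numpy via broadcasting. This is a sketch with assumed sizes and made-up values for N, C, mu, and sigma_sqrd (the names are taken from the Lua snippet above):

```python
import numpy as np

N, C = 4, 8                            # assumed sizes
mu = np.array([1.0, 3.0, 5.0, 7.0])    # hypothetical center per row
sigma_sqrd = 2.0

# Column indices 1..C, matching the Lua loop's 1-based c.
c = np.arange(1, C + 1)

# mu[:, None] has shape (N, 1), so (c - mu[:, None]) broadcasts to (N, C).
gauss = np.exp(-(c - mu[:, None]) ** 2 / (2 * sigma_sqrd))

# Normalize each row to sum to one.
out = gauss / gauss.sum(axis=1, keepdims=True)
```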





Laurens van der Maaten

unread,
Mar 8, 2015, 10:33:49 PM3/8/15
to tor...@googlegroups.com
The lack of :negate() and :oneOverX() functions has bothered me as well; it forces one to write ugly and sometimes unnecessarily inefficient code (like the :mul(-1) in Soumith's example or the torch.ones(N) allocation in your example). Is there any reason why these functions are not included in Torch, or are they still on the to-do list?

I guess in your example, you can replace the last four lines of code by: out = torch.mul(gauss, one_over_Z:expandAs(gauss)).
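Torch's expandAs here plays the role of numpy broadcasting. One caveat, if I'm reading the Torch7 API right: torch.mul multiplies a tensor by a scalar, so the element-wise version of this one-liner would use torch.cmul. A numpy sketch of the same idea, with a small made-up matrix:

```python
import numpy as np

gauss = np.array([[1.0, 3.0],
                  [2.0, 2.0]])

# Shape (2, 1): one reciprocal row sum per row.
one_over_Z = 1.0 / gauss.sum(axis=1, keepdims=True)

# expandAs + element-wise multiply is plain broadcasting in numpy.
out = gauss * one_over_Z
```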

soumith

unread,
Mar 8, 2015, 10:48:05 PM3/8/15
to torch7 on behalf of Laurens van der Maaten
There is a negate, in the form of -m, but it is not in-place: it allocates a new tensor and returns it with the negation of m. :mul(-1) is exactly negation.

oneOverX on the other hand, agreed that it would be useful. Will add, tracking it here: https://github.com/torch/torch7/issues/168

Laurens van der Maaten

unread,
Mar 8, 2015, 11:20:56 PM3/8/15
to tor...@googlegroups.com
Cool, thanks!

I know about -m, but I mean a negate() that can also perform the operation in place. The :mul(-1) trick works, but I guess it uses more computation than necessary, because multiplication is more expensive than negation (unless it does not actually multiply when the argument is -1?).
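For the numpy side of this comparison, both operations can be done in place without an extra allocation by passing out= to the ufuncs; a small sketch with a made-up vector:

```python
import numpy as np

m = np.array([2.0, -4.0, 0.5])

# In-place negation: writes the result back into m, no new tensor.
np.negative(m, out=m)        # m is now [-2.0, 4.0, -0.5]

# In-place element-wise reciprocal (1/x), again writing into m.
np.reciprocal(m, out=m)      # m is now [-0.5, 0.25, -2.0]
```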





Ajay Talati

unread,
Mar 9, 2015, 1:45:55 AM3/9/15
to tor...@googlegroups.com
Great - I'll try that later !!!


out = torch.mul(gauss, one_over_Z:expandAs(gauss))

Thanks Laurens van der Maaten - I owe you a pub lunch !!!