volumetric cross entropy criterion


Måns Larsson

Jun 17, 2016, 5:50:39 AM
to torch7
Hi!
I'm working on a project where I'm interested in applying a voxel-wise cross entropy loss to volumetric data.
I saw that there is an implementation in SpatialCrossEntropyCriterion.lua, but not a volumetric one.
Are there any plans to implement this for volumetric data, or could someone suggest a workaround?

Thanks in advance!

Måns Larsson

Jun 27, 2016, 4:53:30 AM
to torch7
For anyone interested, here is a workaround:

    criterion = cudnn.SpatialCrossEntropyCriterion(class_weights)

    local f = 0
    local outputs = model:forward(inputs)
    local df_do = torch.Tensor(outputs:size()):cuda()

    local depth = outputs:size(3) -- loop over the smallest dimension (depth)
    for i = 1, depth do -- evaluate the criterion on each slice independently
       local out_slice = outputs[{{},{},{i},{},{}}]:squeeze(3):contiguous()
       local target_slice = targets[{{},{i},{},{}}]:squeeze(2)

       local err = criterion:forward(out_slice, target_slice)
       f = f + err

       local df_do_slice = criterion:backward(out_slice, target_slice)
       df_do[{{},{},{i},{},{}}] = df_do_slice
    end

    if criterion.sizeAverage then
       df_do = df_do:div(depth)
    end

    model:backward(inputs, df_do)

\\Måns

Adam Paszke

Jun 27, 2016, 5:32:03 AM
to torch7 on behalf of Måns Larsson
Hi,
In fact, it might be faster to do it in one pass. You might want to try something like this:

-- Prepare outputs: B x C x V x H x W -> B x V x C x H x W
local flat_outputs = outputs:transpose(2, 3):contiguous() -- :view requires contiguous storage, and transpose breaks contiguity
local transposed_size = flat_outputs:size()               -- save the size for reshaping the gradient
-- merge B and V into one dimension: (B*V) x C x H x W
flat_outputs = flat_outputs:view(-1, flat_outputs:size(3), flat_outputs:size(4), flat_outputs:size(5))

-- Prepare targets: B x V x H x W -> (B*V) x H x W
local flat_targets = targets:view(-1, targets:size(3), targets:size(4)):contiguous()

-- Get the loss and the gradient
local err = criterion:forward(flat_outputs, flat_targets)
local flat_df_do = criterion:backward(flat_outputs, flat_targets)
df_do = flat_df_do:reshape(transposed_size):transpose(2, 3):contiguous()
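(Not from the thread, but a quick NumPy sketch of the same index bookkeeping, with made-up small sizes, may help convince yourself the one-pass flattening is equivalent to looping over depth slices: row b*D + i of the merged tensor is exactly depth slice i of batch element b.)

```python
# Sketch: verify that transposing then merging the batch and depth
# dimensions lines up with per-slice processing.
import numpy as np

B, C, D, H, W = 2, 3, 4, 5, 6  # illustrative sizes, not from the thread
outputs = np.random.randn(B, C, D, H, W)

# One pass: B x C x D x H x W -> B x D x C x H x W -> (B*D) x C x H x W
flat = outputs.transpose(0, 2, 1, 3, 4).reshape(B * D, C, H, W)

# Slice-wise: depth slice i of batch element b equals merged row b*D + i
for b in range(B):
    for i in range(D):
        assert np.array_equal(flat[b * D + i], outputs[b, :, i])

print("flattening matches per-slice processing")
```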

Best,
Adam

