Understanding atrous convolution

nila...@gmail.com

Mar 18, 2017, 8:16:38 AM
to Caffe Users
Some approaches to semantic segmentation employ atrous (dilated) convolution, and it is often stated that this technique avoids the problem of low-resolution feature maps caused by repeated pooling. It is not clear to me how this is beneficial in a fully convolutional architecture.
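
To make concrete what I mean, here is a minimal 1-D sketch in numpy (the function and the toy values are mine, just for illustration): the kernel taps are spread apart by the atrous rate, yet the output keeps the input's length.

import numpy as np

# Toy 1-D signal and 3-tap kernel (values are just illustrative).
x = np.arange(10, dtype=float)
w = np.array([1.0, 2.0, 1.0])

def atrous_conv1d(x, w, rate):
    # The kernel taps are spaced `rate` samples apart ("holes"),
    # but the output keeps the input's length thanks to zero padding.
    k = len(w)
    pad = rate * (k - 1) // 2             # "same" padding for odd k
    xp = np.pad(x, pad, mode="constant")
    y = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(k):
            y[i] += w[j] * xp[i + j * rate]
    return y

print(atrous_conv1d(x, w, rate=1))   # ordinary 3-tap convolution
print(atrous_conv1d(x, w, rate=2))   # same output length, receptive field of 5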

From my understanding, atrous convolution produces an output similar to what a combination of pooling and regular convolution would produce, but with the spatial resolution preserved. However, it is not practical to simply replace all pooling layers in the network, since the feature maps would then consume too much memory. So I assume atrous convolution is only used in the skip connections? If so, how is it different from simply branching the network right before a pooling layer, since in both cases the spatial resolution is the same?
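
For reference, this is how I would specify such a layer in Caffe via pycaffe's NetSpec, using the dilation field of convolution_param (the blob names and shapes here are made up):

import caffe
from caffe import layers as L

n = caffe.NetSpec()
# Hypothetical input blob: 512 channels at 64x64 spatial resolution.
n.data = L.Input(input_param=dict(shape=dict(dim=[1, 512, 64, 64])))
# 3x3 convolution with atrous rate 2. With pad = rate * (kernel_size - 1) / 2
# the output stays at 64x64, while the receptive field grows to 5x5.
n.conv_atrous = L.Convolution(n.data, num_output=256, kernel_size=3,
                              dilation=2, pad=2)
print(n.to_proto())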

I would be grateful if someone could clarify what I am missing here.