Hello Jan!
Thank you for your quick reply and insights.
I did the same test as Erfan, and in accordance with what he reported I also see a small improvement compared to the "normal" Conv network - so, as expected, it's slightly faster now :)
Now, because you asked me to delete the get_W_shape() method of the DilatedConv2DLayer, the shapes of the params are different (they now match the weight shape of the "normal" Conv layer).
They look like this:
Number of parameters of Conv network 48064
Net params:
0 : (64, 30, 5, 5)
1 : (64,)
Number of parameters of DilatedConv network 17344
Net params:
0 : (30, 64, 3, 3)
1 : (64,)
Number of parameters of CustomDilatedConv network 17344
Net params:
0 : (64, 30, 3, 3)
1 : (64,)
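Just as a sanity check on the numbers above (plain NumPy arithmetic, no Lasagne needed): each layer's parameter count is the product of the weight shape plus the bias length, which reproduces the totals printed by my script:

```python
import numpy as np

# Parameter counts implied by the printed shapes:
# total = prod(W shape) + len(bias)
conv_params = int(np.prod((64, 30, 5, 5))) + 64     # "normal" Conv, 5x5 kernel
dilated_params = int(np.prod((30, 64, 3, 3))) + 64  # DilatedConv, 3x3 kernel

print(conv_params)     # 48064
print(dilated_params)  # 17344
```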
This is a bit of a problem because I have trained a network whose weights were saved in the "default" DilatedConv2DLayer format, but I would like to load them into the same network with the "default" layers replaced by this CustomDilatedConv class, to achieve higher processing speeds. This is probably easy to address, but honestly I don't know much about the Lasagne/Theano mechanics - I'm just, well, a Lasagne user :)
So what would be best to do?
1) Adapt the weights to this new format?
or
2) Customize the get_W_shape() ?
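In case option 1 is the way to go, here is a minimal sketch of how I imagine adapting the saved values - assuming the weights come out of lasagne.layers.get_all_param_values() as a flat list, and that every 4-D tensor in that list is a dilated-conv weight needing the (in, out, h, w) -> (out, in, h, w) swap (the helper name and the ndim check are my own guesses; a network mixing normal and dilated convs would need per-layer matching instead):

```python
import numpy as np

def adapt_dilated_weights(param_values):
    """Transpose dilated-conv weights from (in, out, h, w) to the
    (out, in, h, w) layout of the custom layer; pass biases through.
    Hypothetical helper - assumes all 4-D params are dilated-conv W's."""
    adapted = []
    for p in param_values:
        if p.ndim == 4:  # convolution weight tensor: swap first two axes
            adapted.append(np.ascontiguousarray(p.transpose(1, 0, 2, 3)))
        else:            # bias vectors and other 1-D params: unchanged
            adapted.append(p)
    return adapted
```

The idea would then be to load the old values, run them through this, and push them into the new network with lasagne.layers.set_all_param_values() - but please correct me if the layout assumption is wrong.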
Thanks again for your advice,
André