CNN with variable length sequences


alois....@niland.io

unread,
Feb 11, 2016, 9:22:05 AM2/11/16
to lasagne-users
Hi,

I'm trying to find a solution for handling variable-length sequences in a convolutional neural network. I do some temporal convolutions and use Global Pooling to pool all the outputs.

I can't think of an easy solution, since a Theano tensor has a fixed shape. My first idea would be to zero-pad the shorter sequences, but the zeros would be taken as input and their feature maps would be included in the Global Pooling.

Do you have any suggestions?

Best,
Aloïs

Sander Dieleman

unread,
Feb 11, 2016, 5:08:33 PM2/11/16
to lasagne-users
You could pad shorter sequences and also include a mask variable which indicates which positions in the tensor are valid, and which should be ignored. Then you could use a slightly modified global pooling layer that takes into account the mask when pooling. In practice you would probably need to have a second InputLayer for the mask variable, and a custom GlobalPoolLayer that takes two inputs (the regular input + the mask).
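
A minimal sketch of this wiring, assuming a 1D input and Lasagne's Conv1DLayer; the variable names are made up, and MaskedGlobalPoolLayer stands for the custom two-input layer Sander describes (a possible implementation is sketched further down the thread):

import lasagne

# features: (batch, channels, time); time is None so it can vary per batch
l_in = lasagne.layers.InputLayer(shape=(None, 1, None))
# mask: (batch, time), with 1.0 for valid time steps and 0.0 for padding
l_mask = lasagne.layers.InputLayer(shape=(None, None))

l_conv = lasagne.layers.Conv1DLayer(l_in, num_filters=64, filter_size=3)
# custom global pooling layer taking two inputs: features + mask
# (as discussed below, the mask must match the conv output length)
l_pool = MaskedGlobalPoolLayer([l_conv, l_mask])

Within one minibatch all sequences are padded to the same length; the None time dimension only lets that length vary between batches.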

Sander

alois....@niland.io

unread,
Mar 9, 2016, 10:39:25 AM3/9/16
to lasagne-users
Hi Sander,

Thank you for your answer. I tried to do it, but I'm having trouble, since this is the first time I've worked with multiple inputs.
Do you have an example?

Also, would I need to create another custom layer to propagate the mask through the conv or pooling layers (the mask changes when you go through these layers)? Or can I use the "get_output_shape" of these layers to get the new mask?

Thank you,

Aloïs

Sander Dieleman

unread,
Mar 10, 2016, 12:39:21 PM3/10/16
to lasagne-users
Hi Sander,

Thank you for your answer. I tried to do it, but I'm having trouble, since this is the first time I've worked with multiple inputs.
Do you have an example?

A layer that takes multiple input layers should inherit from MergeLayer. You can find a bunch of examples in the Lasagne source code on GitHub by looking for 'MergeLayer'.
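
For instance, a masked global average pooling layer might look like this; a sketch, not tested, and the class name is made up:

import theano.tensor as T
from lasagne.layers import MergeLayer

class MaskedGlobalPoolLayer(MergeLayer):
    # inputs: features of shape (batch, channels, time) and a mask of
    # shape (batch, time) with 1.0 for valid steps, 0.0 for padding
    def get_output_shape_for(self, input_shapes):
        # global pooling removes the time axis
        return input_shapes[0][:2]

    def get_output_for(self, inputs, **kwargs):
        x, mask = inputs
        mask = mask.dimshuffle(0, 'x', 1)   # -> (batch, 1, time)
        total = T.sum(x * mask, axis=2)     # sum over valid positions only
        count = T.sum(mask, axis=2)         # number of valid positions
        return total / count                # masked mean over time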
 
Also, would I need to create another custom layer to propagate the mask through the conv or pooling layers (the mask changes when you go through these layers)? Or can I use the "get_output_shape" of these layers to get the new mask?

It might be easiest to just provide the different versions of the mask that you need as separate inputs (i.e. create a separate InputLayer for each). Alternatively, you could indeed write custom layers that correctly manipulate the masks. I guess the main thing you would need is something to deal with pooling (convolution shouldn't affect the mask, I think?), and a regular max-pooling layer may actually be sufficient for that.
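
If you do propagate the mask through the network, pooling the mask itself could look like the following sketch (it assumes the l_mask InputLayer from above and mirrors the pool settings of the feature path):

from lasagne.layers import DimshuffleLayer, MaxPool1DLayer

# give the (batch, time) mask a channel axis so MaxPool1DLayer accepts it
l_mask3 = DimshuffleLayer(l_mask, (0, 'x', 1))
# max-pooling keeps a downsampled position valid (1.0) if any position it
# covers was valid; use the same pool_size/stride as the feature pooling
l_mask_pooled = MaxPool1DLayer(l_mask3, pool_size=2, stride=2)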

goo...@jan-schlueter.de

unread,
Mar 10, 2016, 1:45:40 PM3/10/16
to lasagne-users
It might be easiest to just provide the different versions of the mask that you need as separate inputs (i.e. create a separate InputLayer for each).

Specifically, if you only need to mask out values at the final GlobalPoolLayer (and not at different stages of the network), it may be easier to compute the correct mask for that directly in numpy, rather than writing layers that propagate the mask through the whole network. It should be easy to figure out the valid output length from the valid input sequence length and the sequence of convolutions and poolings, with less code than writing new layer classes (see the sketch after this message).

Best, Jan
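
A minimal numpy-side sketch of what Jan suggests, assuming Lasagne's default 'valid' convolutions and border-ignoring pooling (the helper name is made up):

def valid_output_length(length, filter_size, stride=1, pad=0):
    # output length of a 1D convolution or pooling layer;
    # the same formula covers both
    return (length + 2 * pad - filter_size) // stride + 1

# e.g. 100 valid input steps through conv(3) -> pool(2, stride 2) -> conv(3):
n = valid_output_length(100, 3)           # 98
n = valid_output_length(n, 2, stride=2)   # 49
n = valid_output_length(n, 3)             # 47
# the first 47 positions of the final feature map are valid, so the mask
# passed to the global pooling layer has 47 ones followed by zeros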

alois....@niland.io

unread,
Mar 11, 2016, 3:48:40 AM3/11/16
to lasagne-users, alois....@niland.io
Thank you both for your answers!

Stride in the convolutions would also affect my mask, I think? I'll try to do it with numpy arrays as Jan suggested!

Best,
Aloïs

alois....@niland.io

unread,
Mar 11, 2016, 5:37:17 AM3/11/16
to lasagne-users, alois....@niland.io
Here are the layers:
with really simple tests I'd like to extend ....

Thank you for your answers!

liang16...@gmail.com

unread,
Mar 21, 2020, 9:26:58 AM3/21/20
to lasagne-users
I also think the stride in the convolutions would affect the mask, if I execute multiple convolutions and then the final pooling.

On Friday, March 11, 2016 at 4:48:40 PM UTC+8, alois...@niland.io wrote: