Hi there,
I'm looking at using pylearn2.expr.probabilistic_max_pooling to handle the pooling layer in a convolutional DBN I am implementing.
I understand that the forward pass is done with:
max_pool(z, pool_shape, top_down=None)
This will pool my detection layer (z, which I call h_0, the hidden units of the first layer of the DBN) down by a factor of pool_shape[0] in the first dimension and pool_shape[1] in the second, resulting in p_0 and p_0_sampled (if a theano_rng is supplied).
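For concreteness, here's roughly how I'm calling it (a minimal sketch; the (2, 2) pool shape and the tensor layout are just placeholders of mine, and I'm going from the docstring for the return order p, h, p_samples, h_samples):

    import theano.tensor as T
    from theano.sandbox.rng_mrg import MRG_RandomStreams
    from pylearn2.expr.probabilistic_max_pooling import max_pool

    z = T.tensor4('z')  # pre-sigmoid detection-layer input (my h_0)
    theano_rng = MRG_RandomStreams(2014)

    # With a theano_rng supplied, max_pool returns expectations and samples
    # for both the pooling layer (p) and the detection layer (h)
    p_0, h_0, p_0_sampled, h_0_sampled = max_pool(
        z, pool_shape=(2, 2), theano_rng=theano_rng)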
My question, then: is there an inverse, top-down way of supplying p_0 (essentially v_1, the visible units of the second layer of the DBN) and un-pooling it to form h_0?
It may be a stupid question, and I may be misunderstanding how the top-down flow works with the pooling layers. I realize there is even a top_down parameter, documented as "...representing input from above", but after a look through the slow Python implementation, I'm not so sure it does what I'm hoping for.
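Concretely, what I was hoping top_down might let me do is something like this (a purely hypothetical sketch, reusing z from above; I'm guessing that top_down has the same shape as the pooled output p, and that zeroing the bottom-up input would isolate the top-down signal):

    # t: top-down input from the layer above; my guess is that it should
    # have the same shape as the pooled output p
    t = T.tensor4('t')

    # Hypothetical: zero out the bottom-up input so that only the top-down
    # signal drives the pool -- would this "un-pool" p_0 into h_0?
    p, h = max_pool(T.zeros_like(z), pool_shape=(2, 2), top_down=t)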
I wish to determine P(h_{i,j} = 1 | p_0). The max_pool operation obviously requires z, the detection layer it will be pooling, but in my mind top-down inference should only require the layer to un-pool.
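For reference, the quantity I have in mind is the softmax from Lee et al. (2009), where the detection units in a pooling block B_alpha and their pooling unit form one multinomial group; writing I(h_{i,j}) for the bottom-up input and I(p_alpha) for the top-down input, my reading is:

    P(h_{i,j} = 1 | v, p') = exp( I(h_{i,j}) + I(p_alpha) )
                             / ( 1 + \sum_{(i',j') \in B_alpha} exp( I(h_{i',j'}) + I(p_alpha) ) )

If that reading is right, the bottom-up terms I(h_{i,j}) still appear even during top-down inference, which might be why max_pool always needs z - but I'd appreciate confirmation.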
In fact, I don't fully understand why the max_pool operation returns h and h_samples (the detection-layer expectation and sample) at all, when the detection layer is the very thing being pooled down!
Any help in understanding top-down would be much appreciated!
Thanks