Program becomes very slow when pool_shape tries to pool the whole feature map down to one output


xiaha...@gmail.com

Apr 29, 2016, 9:05:11 AM4/29/16
to pylearn-users
Here is my YAML:
!obj:pylearn2.train.Train {
    dataset: &train !obj:pylearn2.datasets.mnist.MNIST {
        which_set: 'train',
        start: 0,
        stop: 50000
    },
    model: !obj:pylearn2.models.mlp.MLP {
        batch_size: 100,
        input_space: !obj:pylearn2.space.Conv2DSpace {
            shape: [28, 28],
            num_channels: 1
        },
        layers: [ !obj:pylearn2.models.mlp.ConvRectifiedLinear {
            layer_name: 'h2',
            output_channels: 50,
            irange: .05,
            kernel_shape: [5, 5],
            pool_shape: [20, 20],
            pool_stride: [2, 2],
            max_kernel_norm: 1.9365
        }, !obj:pylearn2.models.mlp.Softmax {
            max_col_norm: 1.9365,
            layer_name: 'y',
            n_classes: 10,
            istdev: .05
        }
        ],
    },
    algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
        batch_size: 100,
        learning_rate: .01,
        learning_rule: !obj:pylearn2.training_algorithms.learning_rule.AdaGrad {},
        monitoring_dataset: {
            'valid' : !obj:pylearn2.datasets.mnist.MNIST {
                which_set: 'train',
                start: 50000,
                stop: 60000
            },
            'test' : !obj:pylearn2.datasets.mnist.MNIST {
                which_set: 'test',
                start: 0,
                stop: 10000
            }
        },
        cost: !obj:pylearn2.costs.cost.SumOfCosts { costs: [
            !obj:pylearn2.costs.cost.MethodCost {
                method: 'cost_from_X'
            }, !obj:pylearn2.costs.mlp.WeightDecay {
                coeffs: [ .00005, .00005 ]
            }
        ]
        },
        termination_criterion: !obj:pylearn2.termination_criteria.And {
            criteria: [
                !obj:pylearn2.termination_criteria.MonitorBased {
                    channel_name: "valid_y_misclass",
                    prop_decrease: 0.50,
                    N: 10
                },
                !obj:pylearn2.termination_criteria.EpochCounter {
                    max_epochs: 50
                },
            ]
        },
    },
    extensions: [
        !obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
            channel_name: 'valid_y_misclass',
            save_path: "convolutional_network_best.pkl"
        }
    ]
}


With pool_shape=[2,2] or [10,10] it is OK. But with [20,20] or [24,24], it first stalls after printing:

Input shape: (28, 28)
Detector space: (24, 24)

and takes a long time before printing:

Output space: (3, 3)

With [2,2] this step takes less than one minute, but with [20,20] it takes dozens of minutes.
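For reference, the shapes in the log above can be reproduced with a small arithmetic sketch (assuming a 'valid' convolution and the usual pooling formula; the function name below is mine, not part of pylearn2):

```python
def conv_pool_shapes(input_shape, kernel_shape, pool_shape, pool_stride):
    """Shape arithmetic for a conv + max-pool layer (sketch)."""
    # A 'valid' convolution shrinks each dimension by kernel - 1.
    detector = [i - k + 1 for i, k in zip(input_shape, kernel_shape)]
    # Pooling yields floor((detector - pool) / stride) + 1 windows per dimension.
    output = [(d - p) // s + 1
              for d, p, s in zip(detector, pool_shape, pool_stride)]
    return detector, output

# Matches the log: detector space (24, 24), output space (3, 3).
print(conv_pool_shapes([28, 28], [5, 5], [20, 20], [2, 2]))
```

Note that with pool_shape=[20, 20] and pool_stride=[2, 2] the pooling windows overlap almost completely (a single detector unit can fall into up to (20/2)**2 = 100 windows), which is presumably why the graph gets so much more expensive to build and compile than with [2, 2].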

Then with [20,20]/[24,24] it stalls here:
Parameter and initial learning rate summary:
h2_W: 0.00999999977648
h2_b: 0.00999999977648
softmax_b: 0.00999999977648
softmax_W: 0.00999999977648
Compiling sgd_update...

I remember it still had not finished after 4 hours...
It seems I'm stuck in some bad loop? I'm asking a friend to test it on a server.
At the first stall point I interrupted it and found it looping in theano/printing.py around lines 463 and 492. I don't know if that is related.

Any advice, Thanks.

Frédéric Bastien

Apr 29, 2016, 10:02:43 AM4/29/16
to pylear...@googlegroups.com
Hi, asking the same question again within a few hours isn't great. Replying to the mailing list is not our only job.

I can't answer your question, but I just want to tell you that pylearn2 doesn't have developers anymore. There are other projects built on top of Theano, like Keras, Blocks, and Lasagne, that you could use.

If you want to investigate, you could use the Theano profiler (search the Theano web site).
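For example (a hedged sketch; the script and YAML filenames below stand in for however you launch training), the profiler can be switched on through the THEANO_FLAGS environment variable:

```shell
# Enable Theano's profiler; per-Op timing summaries are printed at process exit.
# profile_memory additionally tracks per-Op memory usage.
THEANO_FLAGS=profile=True,profile_memory=True python train.py mlp.yaml
```

The Op-level breakdown should show whether the time is going into compilation or into a particular pooling Op.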

Fred



xiaha...@gmail.com

Apr 29, 2016, 11:37:35 AM4/29/16
to pylearn-users
Hi,
Thanks, I just deleted the previous post and rewrote it.
I'll try profiler.