Regression with MLP?


benh...@gmail.com

unread,
Jun 16, 2013, 7:52:07 PM6/16/13
to pylea...@googlegroups.com
How do I do regression with MLP?  Do I set the
output layer to pylearn2.models.mlp.Linear?

Thank You,
Omar

Ian Goodfellow

unread,
Jun 16, 2013, 8:55:57 PM6/16/13
to pylea...@googlegroups.com
Yes, or LinearGaussian if you want to learn the conditional variance as well as the conditional mean. 
---
You received this message because you are subscribed to the Google Groups "pylearn-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pylearn-dev...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

benh...@gmail.com

unread,
Jun 16, 2013, 9:59:58 PM6/16/13
to pylea...@googlegroups.com
Thank you. I am trying to run an MLP with a linear layer,
but I can't get it working. I have a DenseDesignMatrix
with a y vector of length 228605 and an X matrix with 228605 examples and 33 features.

I get the following error:

"Compiling accum done. Time elapsed: 4.000000 seconds
Traceback (most recent call last):
  File "/home/oab102/pylearn2/pylearn2/scripts/train.py", line 141, in <module>
    train_obj.main_loop()
  File "/home/oab102/pylearn2/pylearn2/train.py", line 128, in main_loop
    self.run_callbacks_and_monitoring()
  File "/home/oab102/pylearn2/pylearn2/train.py", line 150, in run_callbacks_and_monitoring
    self.model.monitor()
  File "/home/oab102/pylearn2/pylearn2/monitor.py", line 192, in __call__
    a(X, y)
  File "/usr/lib/python2.7/site-packages/Theano-0.6.0rc3-py2.7.egg/theano/compile/function_module.py", line 498, in __call__
    allow_downcast=s.allow_downcast)
  File "/usr/lib/python2.7/site-packages/Theano-0.6.0rc3-py2.7.egg/theano/tensor/basic.py", line 803, in filter
    data.shape))
TypeError: ('Bad input argument to theano function with name "Monitor.accum[0]"  at index 1(0-based)', 'Wrong number of dimensions: expected 2, got 1 with shape (1000,).')
"
My YAML file is as follows:

"
!obj:pylearn2.train.Train {
    dataset: &train !pkl: "mydata.pkl",
    
    model: !obj:pylearn2.models.mlp.MLP {
        layers: [
                 !obj:pylearn2.models.mlp.Sigmoid {
                     layer_name: 'h0',
                     dim: 200,
                     sparse_init: 15,
                 },
        !obj:pylearn2.models.mlp.Linear {
                     layer_name: 'y',
                   #  n_classes: 1,
                     dim:  1,
                     irange: 0.
                 }
                ],
        nvis: 33,
    },
       algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
        batch_size: 1000,
        learning_rate: .01,
        init_momentum: .5,
        monitoring_dataset : !pkl: "mydata.pkl"
         
       ,
        termination_criterion: !obj:pylearn2.termination_criteria.MonitorBased {
            channel_name: "valid_y_misclass",
        
        }
    },
    extensions: [
        !obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
             channel_name: 'valid_y_misclass',
             save_path: "mlp_best.pkl"
        },
    ]
}
"


Ian Goodfellow

unread,
Jun 16, 2013, 10:07:41 PM6/16/13
to pylea...@googlegroups.com
I think y needs to be a matrix with 1 column, rather than a vector.
Pascal, is it possible to make a better error message in this case now that we have your data specification system?
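Something along these lines should do it (an untested numpy sketch; `yy` is a stand-in name for your 1-D target array):

```python
import numpy as np

yy = np.arange(5.0)       # stand-in for a 1-D target vector
print(yy.ndim, yy.shape)  # 1 (5,)

# Make it a matrix with one column so the monitor sees 2 dimensions:
yy = yy.reshape(-1, 1)    # equivalently: yy[:, np.newaxis]
print(yy.ndim, yy.shape)  # 2 (5, 1)
```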

benh...@gmail.com

unread,
Jun 16, 2013, 10:30:23 PM6/16/13
to pylea...@googlegroups.com
I think y is 1 column though.

The following code is how I created the data:

"
xx = numpy.loadtxt(open("xdataee.txt","rb"),delimiter=" ",skiprows=1)
yy = numpy.loadtxt(open("ydataee.txt","rb"),delimiter=" ",skiprows=1)


totalmatrix = dense_design_matrix.DenseDesignMatrix(X=xx,y=yy)
totalmatrix.use_design_loc('train_design.npy')
serial.save('mydata.pkl', totalmatrix)

Ian Goodfellow

unread,
Jun 16, 2013, 11:14:44 PM6/16/13
to pylearn-dev
Print xx.ndim and yy.ndim. Both of them need to be 2.

benh...@gmail.com

unread,
Jun 17, 2013, 10:20:31 AM6/17/13
to pylea...@googlegroups.com
It worked, thank you.

David Reichert

unread,
Jun 21, 2013, 6:18:58 PM6/21/13
to pylea...@googlegroups.com

Just to make sure, regarding the YAML file that was posted: there is no valid_y_misclass channel for the linear layer, right? Is using the valid_objective channel that SGD adds the correct approach?

David R

Ian Goodfellow

unread,
Jun 21, 2013, 9:05:42 PM6/21/13
to pylearn-dev
That is correct.

Amogh Gudi

unread,
Aug 25, 2015, 6:56:27 AM8/25/15
to pylearn-dev
Just to let you guys know, there is one important difference in the Mean Squared Error computation in "dataset_y_mse" channel of the LinearGaussian layer, and in the "dataset_objective" of the Linear layer (with use_abs_loss=false) for multivariate (multiple output labels) regression:

In LinearGaussian layer, the MSE is computed correctly as the squared difference between prediction and target labels averaged over all labels in one example, averaged over the whole dataset.
rval['mse'] = T.sqr(state - targets).mean()

In Linear layer, the cost function (which I assume is trying to calculate MSE), is computed as the squared difference between prediction and target labels summed over all labels in one example, and then averaged over the whole dataset.
T.sqr(Y - Y_hat).sum(axis=1).mean()

I don't think what the Linear layer implements is standard. It doesn't really fit the definition of Sum Squared Error (which is summed over data samples, not labels). Also, this computed error depends on the number of labels per example, which is annoying.

Any comments?
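To make the difference concrete, here is a small numpy sketch (not pylearn2 code; the arrays are made up):

```python
import numpy as np

# 4 examples, 3 output labels each (made-up numbers)
Y     = np.array([[1., 0., 2.], [0., 1., 1.], [2., 2., 0.], [1., 1., 1.]])
Y_hat = np.zeros_like(Y)

sq = (Y - Y_hat) ** 2

# Linear layer's objective: sum over labels, then mean over examples
linear_cost = sq.sum(axis=1).mean()   # 4.5

# LinearGaussian's mse channel: mean over labels and examples
mse = sq.mean()                       # 1.5

# The two differ exactly by the number of labels per example:
print(linear_cost / mse)              # 3.0
```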

Ian Goodfellow

unread,
Sep 4, 2015, 8:37:47 PM9/4/15
to pylea...@googlegroups.com

The linear layer is trained to maximize the expected value of log p(y | x), where y is distributed according to a conditional Gaussian with variance 1. This corresponds to a sum across outputs and a mean across examples. It is annoying that the magnitude of the cost changes when the number of outputs changes, but that's necessary to make sure it's scaled correctly relative to other costs (for example, when you also have the mean squared error across another set of variables in another cost). If you wanted to send a pull request adding a flag to make it take the mean across outputs, I think that would be a useful feature.
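To spell out the correspondence (a quick numpy check, not pylearn2 code): for a unit-variance Gaussian, -log p(y | x) for one example is 0.5 times the sum of squared errors across outputs, plus a constant, so maximizing the expected log-likelihood is the same as minimizing the sum across outputs averaged across examples:

```python
import numpy as np

rng = np.random.RandomState(0)
Y     = rng.randn(4, 3)   # 4 examples, 3 outputs (made-up data)
Y_hat = rng.randn(4, 3)
D = Y.shape[1]

# Negative log-likelihood of y under N(y_hat, I), per example:
nll = 0.5 * ((Y - Y_hat) ** 2).sum(axis=1) + 0.5 * D * np.log(2 * np.pi)

# The Linear layer's cost: sum across outputs, mean across examples
cost = ((Y - Y_hat) ** 2).sum(axis=1).mean()

# Mean NLL = 0.5 * cost + constant, so the two have the same minimizer:
assert np.allclose(nll.mean(), 0.5 * cost + 0.5 * D * np.log(2 * np.pi))
```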