Lasso or elastic net regression


Ursela Barte
Mar 29, 2022, 12:17:00 PM
to Ceres Solver
Hi there ceres-team,
I need to implement Lasso regression for my problem. Based on this: https://stackoverflow.com/questions/68043387/how-to-add-lasso-l1-norm-reisudial-in-ceres-solver I thought it would suffice to swap the loss function for HuberLoss or SoftLOneLoss. I tried a few runs, varying the parameter given to the loss function, but I couldn't see any change or any induced sparsity in my resulting optimization parameters. The parameter values after optimization are typically small, between -0.4 and 0.4.
Do I need to adjust the code further?
Any hints would be appreciated!
Cheers
Ursela
 

Sameer Agarwal
Mar 29, 2022, 12:19:24 PM
to ceres-...@googlegroups.com
Ursela,
Did you just add the loss function to the residuals or did you also add a penalty term for the solution vector/parameters?
Sameer



Ursela Barte
Mar 29, 2022, 12:25:45 PM
to Ceres Solver
Just added the loss function. Can I add the penalty term within the same residual, or does it need to be a separate one? The penalty would basically have to be the number of elements != 0 in the solution vector, right?

Sameer Agarwal
Mar 29, 2022, 12:33:41 PM
to ceres-...@googlegroups.com
Ursela,
If you want the solution vector to be sparse, you need to add a term that minimizes its L1 norm. I recommend adding a new residual block which just returns the entire parameter vector as the residual, and attaching an L1-style loss to that residual block. Adding an L1 norm or Huber loss to the data terms only robustifies them; it does not make the solution sparse.

The basic nonlinear least squares problem is

\sum_i f^2_i(x_i, theta)

where x_i is your data and theta is the parameter vector you are trying to fit.

What you are doing right now is

\sum_i L(f^2_i(x_i, theta))

where L is some loss function.

What you want to solve instead is

\sum_i f^2_i(x_i, theta) + \lambda * |theta|_1

where |theta|_1 indicates the 1-norm of the parameter vector.

So you want a residual block corresponding to the \lambda * |theta|_1 term.

Since Ceres does not have an exact L1 loss, the next best thing is to use a smooth approximation to the L1 norm via SoftLOneLoss and solve

\sum_i f^2_i(x_i, theta) + \lambda * L(|theta|^2)

where L is the SoftLOneLoss.

HTH,
Sameer


Ursela Barte
Mar 29, 2022, 12:43:18 PM
to Ceres Solver
Hi Sameer,
thanks for the elaborate answer and your effort! I will have to do some additional reading to fully understand it. If I still fail in a week or so, I might come back to you on this.
Have a great week!!
Ursela