Hello Niklas,
Thank you for your question.
The short answer is that, currently, Manopt does not offer ready-to-use tools to solve optimization problems on a manifold with additional constraints (as in your case) -- but we are working on it this summer! If you would like to share some data for your problem (the cost function, the matrix A), we would certainly be interested in taking a look.
Until then, here are a couple of ideas for how one could approach your problem, ordered by ease of implementation rather than by expected results:
- splitting methods: if it is easy enough to project onto the cone C, you could artificially introduce a new variable y under the seemingly pointless constraint x = y, and solve: min f(x) + I_C(y) s.t. x on the sphere and x = y, where I_C(y) is 0 if y is in the cone C and infinite otherwise. You can then optimize by alternating between x, y, and an extra variable corresponding to the Lagrange multipliers for the constraint x = y. This rough sketch is worked out much more precisely in the MADMM paper:
https://link.springer.com/chapter/10.1007/978-3-319-46454-1_41 (also on arXiv) -- I didn't check this in detail, but I expect it may work.
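To make the alternation concrete, here is a rough numpy sketch of that loop (this is not Manopt code; the toy cone, projection, step sizes, and iteration counts below are all placeholder choices of mine for illustration):

```python
import numpy as np

def madmm(cost_grad, proj_C, x0, rho=1.0, n_iters=100, n_inner=25, step=0.05):
    """Sketch of MADMM for  min f(x) + I_C(y)  s.t.  x on unit sphere, x = y.
    cost_grad: Euclidean gradient of f; proj_C: projection onto the cone C."""
    x = x0 / np.linalg.norm(x0)
    y = proj_C(x)
    u = np.zeros_like(x)                       # scaled dual variable (multipliers)
    for _ in range(n_iters):
        # x-step: a few Riemannian gradient steps on the sphere for
        # f(x) + (rho/2)||x - y + u||^2
        for _ in range(n_inner):
            g = cost_grad(x) + rho * (x - y + u)   # Euclidean gradient
            g = g - np.dot(g, x) * x               # project to tangent space at x
            x = x - step * g
            x = x / np.linalg.norm(x)              # retract back to the sphere
        y = proj_C(x + u)                          # y-step: projection onto C
        u = u + x - y                              # dual (multiplier) update
    return x
```

For instance, with the toy cost f(x) = -c'x and the nonpositive orthant as a stand-in cone (whose projection is just min(y, 0)), the iterate settles near the feasible maximizer of c'x on the sphere.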
- penalty methods: you could drop the constraint that x \in C and instead add a penalty term to the cost function, so that you simultaneously minimize your cost and max([Ax ; 0]) (the largest entry of the vector containing the entries of Ax together with 0, which is 0 exactly when Ax <= 0). In practice, you minimize a weighted sum of the two, and you need to iterate over the weights to find "the right one". Notice that max([Ax ; 0]) is a nonsmooth function, and currently Manopt doesn't handle nonsmooth costs: we're working on that too. In the meantime, you can replace max([Ax ; 0]) with a smooth surrogate, using the log-sum-exp trick:
https://hips.seas.harvard.edu/blog/2013/01/09/computing-log-sum-exp/. (In that page's notation, z is an approximation of max(x); one would typically divide x by a small epsilon to accentuate the differences between entries, then multiply the result back by epsilon.)
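In numpy, the smoothed penalty could look as follows (a sketch; the value of eps is a placeholder, and subtracting the max before exponentiating is the standard stabilization from that blog post):

```python
import numpy as np

def smooth_max(v, eps=0.01):
    """Smooth surrogate for max(v): eps * log(sum(exp(v / eps))).
    Subtracting max(v) first avoids overflow in the exponentials."""
    m = np.max(v)
    return m + eps * np.log(np.sum(np.exp((v - m) / eps)))

def penalty(A, x, eps=0.01):
    """Smooth approximation of max([Ax ; 0]), which vanishes iff Ax <= 0."""
    return smooth_max(np.append(A @ x, 0.0), eps)
```

The surrogate always upper-bounds the true max, and shrinking eps tightens the approximation at the cost of steeper gradients.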
- If it is easy to find an initial point inside the feasible set, you could also try barrier methods, that is: you would add a penalty which goes to infinity as you approach the boundary of the cone. Something like -sum(log(-Ax)) (but I haven't thought about this much). Here too there would be a weight parameter to update progressively.
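A minimal sketch of the barrier term, in its standard form from convex optimization (again not Manopt code; f, A, and mu are placeholders, and one would decrease the weight mu across outer iterations):

```python
import numpy as np

def barrier_cost(f, A, mu):
    """Return the barrier-augmented cost  f(x) - mu * sum(log(-Ax)).
    It is finite only strictly inside {x : Ax < 0} and blows up to +inf
    as x approaches the boundary of the cone."""
    def total(x):
        s = -A @ x                    # slacks: must be strictly positive
        if np.any(s <= 0):
            return np.inf             # outside (or on the boundary of) the cone
        return f(x) - mu * np.sum(np.log(s))
    return total
```

Any iterative method using this cost must be started strictly inside the cone and must keep its iterates there (for example by backtracking on the step size whenever the cost returns infinity).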
Note: there is an additional issue I have overlooked here, namely, that your cone is open. That's an issue for optimization, since your problem may then have no solution at all. I wrote my answer assuming you would be happy with a solution in {x : Ax <= 0} (the closure).
I hope this helps,
Nicolas