Dear Manopt users,
We are excited to announce the Manopt 7.0 release, with numerous improvements.
The star new feature is automatic differentiation (AD), implemented by Xiaowen Jiang during a summer internship -- many thanks to him!
(Don't forget to execute importmanopt.m at the Matlab prompt.)
To give AD a spin, make sure you have Matlab R2021a (or later) and an up-to-date Deep Learning Toolbox, then run:

n = 5;
A = randn(n); A = A+A';
problem.M = spherefactory(n);
problem.cost = @(x) x'*A*x;
problem = manoptAD(problem); % figures out gradient and Hessian
x = trustregions(problem);
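
If you would like to double-check the derivatives that manoptAD produced, Manopt's usual diagnostics tools still apply. Here is a minimal sketch reusing the problem structure built above:

% Compare the AD-generated derivatives against finite differences.
checkgradient(problem);  % gradient check: the plotted slope should be close to 2
checkhessian(problem);   % Hessian check: the plotted slope should be close to 3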
On the tutorial page, you can now also find links to books about Riemannian optimization, as well as to videos introducing the necessary basics of geometry.
Don't hesitate to pass the news around,
The Manopt team
PS: Manopt is also on GitHub, open to your input.