I have a rudimentary question, if anyone has time to entertain it. I couldn't find a matching topic on the forum, but I apologize if I overlooked something. I also hope my pasted code is readable; I'm not sure whether there's a better way to paste it here.
I am trying to minimize a function over the space of fixed-rank matrices; in the example below, I optimize over 2x3 matrices of rank 2. The cost function below is just a sample, and the error does not seem to depend on the particular choice of cost function. Here is my code:
import numpy as np
import autograd.numpy as anp
import pymanopt

manifold = pymanopt.manifolds.fixed_rank.FixedRankEmbedded(2, 3, 2)

# A point on FixedRankEmbedded is the triple (u, s, vt) of SVD factors,
# so the cost function first recovers the full matrix from them.
@pymanopt.function.autograd(manifold)
def cost(u, s, vt):
    mat = u @ np.diag(s) @ vt
    return anp.sum(mat ** 2)  # sample cost; the error does not depend on it

problem = pymanopt.Problem(manifold, cost)
optimizer = pymanopt.optimizers.SteepestDescent()
result = optimizer.run(problem)
When I run the code above, the error occurs in the last line. The traceback points to "~/anaconda3/lib/python3.8/site-packages/autograd/core.py in _mut_add(self, x, y)", and the error message is:
UFuncTypeError: Cannot cast ufunc 'add' output from dtype('O') to dtype('float64') with casting rule 'same_kind'
The failure happens at line 213 of core.py, in
def _mut_add(self, x, y): x += y; return x
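For what it's worth, the casting failure itself seems reproducible with plain numpy, independent of autograd or pymanopt: an in-place add of an object-dtype array into a float64 array is exactly the operation _mut_add performs. This is only a minimal illustration of the error message, not my actual cost function:

```python
import numpy as np

buf = np.zeros(3)                         # float64 accumulation buffer
objs = np.array([1, 2, 3], dtype=object)  # object dtype, like an array of ArrayBoxes

try:
    buf += objs  # same operation as _mut_add: x += y
except TypeError as e:  # UFuncTypeError is a TypeError subclass
    print(e)  # cannot cast ufunc 'add' output from dtype('O') to dtype('float64')
```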
It looks to me like the output of my cost function is first interpreted as a float64 and later converted to an "object"; I believe the "object" is an autograd ArrayBox or something similar. When I run similar code on the sphere manifold S^n (with a different cost function that still outputs float64), I don't get this error: it runs fine. Is it possible that the error stems from the fact that I am recovering the matrix inside the cost function itself? Any tips would be appreciated. Thanks!