Dear Fan,
In principle, the manifold only includes matrices of rank exactly k, represented as YY^T, where Y has full rank.
However, on the boundary of that manifold, we find matrices of rank < k, represented as YY^T with Y rank deficient.
These rank-deficient matrices are not part of the manifold, but it is numerically difficult to prevent convergence to them. One way to think about it is: with the representation YY^T, the set of matrices of rank exactly k is an open set. As long as we remain in that open set, we enjoy a nice Riemannian geometry, which is what allows theory and algorithms to work. But if we ever "step out" and reach a rank-deficient matrix, we lose the Riemannian geometry: theory breaks down, and algorithms may fail (NaN's, Inf's, failure to converge...).
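To make this concrete, here is a small illustration in plain NumPy (not Manopt; the variable names are mine): the rank of YY^T equals the rank of Y, and the smallest singular value of Y tells you how close you are to the boundary of the open set.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3

# Full-rank factor: X = Y @ Y.T has rank exactly k, i.e., X is on the manifold.
Y_full = rng.standard_normal((n, k))
print(np.linalg.matrix_rank(Y_full @ Y_full.T))  # 3

# Rank-deficient factor: duplicate a column, so Y (and hence YY^T) has rank k-1.
Y_def = Y_full.copy()
Y_def[:, 2] = Y_def[:, 1]
print(np.linalg.matrix_rank(Y_def @ Y_def.T))  # 2

# The smallest singular value of Y is a natural diagnostic: it is bounded away
# from zero inside the manifold and hits zero exactly on the boundary.
print(np.linalg.svd(Y_full, compute_uv=False)[-1])  # strictly positive
print(np.linalg.svd(Y_def, compute_uv=False)[-1])   # (numerically) zero
```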
I tend to think of it like this: the interior of the unit square in the plane is a manifold, and as long as I remain in that interior, I see a nice two-dimensional manifold. But if I converge to an edge of the square, that geometry breaks down (and likewise if I converge to a vertex).
So, what now? In practice, if the optimal solution to your optimization problem has rank < k, then the optimization algorithm is likely to converge to it -- and hence to step out of the manifold. There are two scenarios. Either that is actually what you want (this happens in certain problems), in which case you may implement a stopping criterion that halts the algorithm when rank deficiency is encountered (with options.stopfun for example; see the documentation at
https://manopt.org/tutorial.html). Or you do not want that, in which case you could examine your cost function again and investigate why it is that optimizers tend to be rank deficient. Either that is a good thing, and you may want to work with a smaller k; or it is a bad thing, and you may want to add a penalty/regularizer to your cost function to promote solutions of full rank (e.g., something involving det(Y'*Y)).
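As a sketch of the last idea (again plain NumPy, not Manopt; the function name and the weight mu are mine): a barrier term such as -mu * log det(Y'Y) is finite when Y has full rank and blows up as Y approaches rank deficiency, so adding it to the cost discourages the optimizer from drifting to the boundary.

```python
import numpy as np

def logdet_barrier(Y, mu=1e-3):
    """-mu * log det(Y'Y): finite for full-rank Y, grows without bound
    as Y approaches rank deficiency. (Illustrative; add this to your cost.)"""
    sign, logabsdet = np.linalg.slogdet(Y.T @ Y)
    assert sign > 0, "Y'Y must be positive definite, i.e., Y must have full rank"
    return -mu * logabsdet

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 3))

# Interpolate Y toward a rank-deficient matrix: the barrier value grows.
Y_def = Y.copy()
Y_def[:, 2] = Y_def[:, 1]
for t in [0.0, 0.9, 0.99, 0.999]:
    Yt = (1 - t) * Y + t * Y_def
    print(t, logdet_barrier(Yt))
```

In an actual Manopt run, you would add such a term (and its gradient) to the cost function handed to the solver; the weight mu trades off fidelity to the original cost against the strength of the full-rank promotion.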
I hope this makes sense.
Best,
Nicolas