Precisely, that's the limitation I talked about.
However, simply trying to solve the problem without using the optimizer framework shows that IPOPT does not like this problem:
optimize([constraints, xsim(:,i) == x{1}, u_init == usim_init], objective,sdpsettings('debug',1,'solver','ipopt'))
EXIT: Invalid number in NLP function or derivative detected.
ans =
struct with fields:
yalmiptime: 1.5279
solvertime: 0.1931
info: 'YALMIP called solver with incorrect input (IPOPT)'
problem: 7
The reason is that IPOPT tries to start from an initially infeasible point, which happens to produce Inf in a logarithm.
I hope this whole code is not an attempt to avoid constrained optimization by relaxing constraints into logarithmic barriers. That will only be more complicated than simply solving the constrained problem, as IPOPT uses barriers internally anyway.
What you will have to do is initialize with a feasible solution (either manually, or by solving an initial program that yields a point in the interior where the logarithms are defined), and then use that as the initial point.
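A minimal sketch of the manual route, using YALMIP's assign and the usex0 option (here x_interior is an assumed strictly interior guess, not something from your code):

```matlab
% Hypothetical sketch: seed the decision variables with a strictly interior
% point so the logarithms are defined at IPOPT's first iterate.
assign(x{1}, x_interior);                          % x_interior: assumed feasible guess
ops = sdpsettings('usex0', 1, 'solver', 'ipopt');  % pass assigned values as initial point
optimize([constraints, xsim(:,i) == x{1}, u_init == usim_init], objective, ops);
```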
It appears to be the nasty logarithmic terms in the objective that are problematic. Solving the problem without an objective to obtain an initial solution, and then solving with the objective from there, at least gets IPOPT started. Unfortunately, the problem is not well-behaved, so IPOPT is really slow on the second call:
optimize([constraints, xsim(:,i) == x{1}, u_init == usim_init], [],sdpsettings('solver','ipopt'));
optimize([constraints, xsim(:,i) == x{1}, u_init == usim_init], objective,sdpsettings('usex0',1,'solver','ipopt'));