inverse


Chokri Sandy

Mar 20, 2014, 3:17:28 PM
to yal...@googlegroups.com
Help please

I would like to minimize e_bar, but MATLAB does not accept the inverse. All parameters are known except nu.

nu = sdpvar(1,1);

e_bar = ( -Am^-1 + (Am +(1/nu)*P^-1*Am'*P+B*Theta_star )^-1 * ((1/nu)*P^-1*Am'*P*Am^-1+eye(3)))*Bm*r

Error using sdpvar/mpower (line 54)
Only scalars can have negative or non-integer powers

Johan Löfberg

Mar 20, 2014, 3:22:11 PM
to yal...@googlegroups.com
You cannot have an inverse in an expression involving decision variables. You have to come up with a model that avoids the inverse.

Johan Löfberg

Mar 20, 2014, 3:24:06 PM
to yal...@googlegroups.com
And this indicates that e_bar is a matrix, which makes no sense since you say you want to minimize it (what does it mean to minimize a matrix?)

If only 1/nu enters your model, why do you work with nu? Parameterize the problem in the variable z = 1/nu instead. And then get rid of the inverse...
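A minimal sketch of that substitution in YALMIP (assuming nu > 0; the rest of the model is omitted):

z = sdpvar(1,1);             % z plays the role of 1/nu, so no scalar inverse is needed
Constraints = [z >= 1e-6];   % keeps nu = 1/z positive and finite
% ... build the model with z wherever 1/nu appeared ...
% optimize(Constraints, z);  % minimizing z is the same as maximizing nu = 1/z
% nu_opt = 1/value(z);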

Mark L. Stone

Mar 20, 2014, 4:15:09 PM
to yal...@googlegroups.com
There can be workarounds for the lack of matrix inverse support, such as introducing auxiliary variables and constraints.  However, as I have all too painfully discovered, in some cases such workarounds can significantly degrade the model, and you may be best off avoiding them.  So, for example, calling fmincon or knitro directly from MATLAB may allow you to handle matrix inverses directly, and in some cases be far superior to using YALMIP as a front end and working around the lack of matrix inverse support.  On the other hand, when the solver itself doesn't support matrix inverses, as with BARON for global optimization, you're out of luck.
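One common example of such an auxiliary-variable workaround (illustrative only; whether it fits this particular model is another question): if the model only needs a matrix Y that dominates inv(X) for a positive definite X, the nonconvex equality Y == inv(X) can be replaced by a Schur-complement LMI.

n = 3;
X = sdpvar(n,n);                            % symmetric by default
Y = sdpvar(n,n);
Constraints = [X >= 1e-6*eye(n), ...        % X positive definite
               [Y eye(n); eye(n) X] >= 0];  % by Schur complement: Y >= inv(X)
% If the objective pushes Y downwards, e.g. minimize trace(Y), the bound is
% tight and Y coincides with inv(X) at the optimum.
% optimize(Constraints, trace(Y));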

Hint, hint: I'm sure developing support for the matrix inverse in YALMIP would take some effort, but on the other hand, there is a well-developed matrix calculus for differentiation, including the matrix inverse, chain rule, etc.  Thanks.
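For reference, the relevant identity from that matrix calculus is d(inv(X)) = -inv(X)*dX*inv(X); a quick numerical sanity check in plain MATLAB:

X  = randn(3) + 5*eye(3);      % well-conditioned test matrix
dX = 1e-6*randn(3);            % small perturbation
lhs = inv(X + dX) - inv(X);    % actual change in the inverse
rhs = -inv(X)*dX*inv(X);       % first-order prediction
norm(lhs - rhs)                % tiny: second order in dX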

Johan Löfberg

Mar 20, 2014, 4:23:30 PM
to yal...@googlegroups.com
Yes, working around an inverse by introducing new variables is often detrimental, and manually working with evaluation and derivatives of the inverse operator would be far more efficient. Best is, of course, to get rid of the inverse in the model if possible.

Unfortunately, introducing the inverse as a supported operator is currently hindered by the fact that YALMIP's framework for function evaluation and derivatives is limited to elementwise operators R^n -> R^n and general R^n -> R functions.

Mark L. Stone

Mar 20, 2014, 4:39:20 PM
to yal...@googlegroups.com
You can do it with elementwise operators if you make matrices (and vectors) your elements.  Of course these won't have a domain of R^n :)

Johan Löfberg

Mar 20, 2014, 4:41:25 PM
to yal...@googlegroups.com
But that's not YALMIP, that's something else one would write from scratch. I'll leave that for the kids.

Chokri Sandy

Mar 20, 2014, 5:18:11 PM
to yal...@googlegroups.com
Dear Johan,
Attached is my code. I need to find the maximum nu such that G44 is satisfied; G44 depends on omega, so G33 needs to be solved, and G33 depends on theta, so G11 and G22 need to be solved.
Thank you
mech336_p2.m

Johan Löfberg

Mar 20, 2014, 5:24:39 PM
to yal...@googlegroups.com
You will not be able to solve this using any of the solvers interfaced with YALMIP.

And why do you say LMI in your code? It is obviously far from linear.