Very generally speaking, you can always find a root by minimization if you square your function (simply because no negative values are possible, so the minimum value zero is attained exactly at a root)!
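For instance, a minimal sketch with scipy.optimize.fmin (F here is just a toy example standing in for your function):

    import numpy as np
    from scipy.optimize import fmin

    def F(x):
        # the function whose root we want (toy example)
        return np.cos(x) - x

    def f(x):
        # squared residual: nonnegative, and zero exactly at a root of F
        return F(x[0])**2

    xmin = fmin(f, [0.5], disp=False)
    print(xmin)   # ~0.739085, where cos(x) = x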
H
Warren
--
Warren Weckesser
Enthought, Inc.
515 Congress Avenue, Suite 2100
Austin, TX 78701
512-536-1057
Warren, thanks for your input.
Do you know a way to add constraints to fsolve, or some other root-finding technique? If there's no other option, I'll have to go with Harald's suggestion, even if it is slow to converge.
Also, does anyone know what the input format is for these minimization techniques (fmin_l_bfgs_b, fmin_tnc, fmin_cobyla)? I tried:
Always copy-and-paste the traceback, not just the final message. For
the fmin_cobyla constraints, you don't pass a function that returns a
list. You pass a list of functions.
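For example (a sketch; the objective and constraints are made up):

    import numpy as np
    from scipy.optimize import fmin_cobyla

    def objective(x):
        # toy objective: squared distance from the point (1, 2)
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    # each constraint is its own function; COBYLA treats cons_i(x) >= 0 as feasible
    def con1(x):
        return x[0]          # enforces x[0] >= 0

    def con2(x):
        return 1.0 - x[1]    # enforces x[1] <= 1

    x_opt = fmin_cobyla(objective, [0.0, 0.0], cons=[con1, con2], rhoend=1e-7)
    print(x_opt)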
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
Oh, you mean for the function itself. Return an array, not a list.
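If this is about the func argument to fsolve, something like this (made-up system, just to show the return type):

    import numpy as np
    from scipy.optimize import fsolve

    def func(x):
        # return an ndarray, not a Python list
        return np.array([x[0]**2 + x[1] - 1.0,
                         x[0] - x[1]])

    print(fsolve(func, [0.5, 0.5]))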
Am I missing something here?
I ran into the same problem.
One thing you can try is to add a couple of lines to your function
so that it returns a constant value (one that is not a solution to your
problem) when (1 - x**2) < 0, instead of doing the actual calculation.
In my case it did the trick.
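Something along these lines (a sketch; the sqrt expression stands in for whatever your function actually computes):

    import numpy as np

    def func(x):
        if 1.0 - x**2 < 0.0:
            # outside the domain: return a large constant that is
            # certainly not a solution, so the solver backs off
            return 1e10
        return np.sqrt(1.0 - x**2) - 0.5   # the actual calculation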
Ernest
Yes.
> Then Newton's method should locally converge quadratically.
>
No, because the derivative of the function being minimized is zero at
the root, i.e. the root is a double root. In that case the convergence of Newton's method is only linear. (For example, Newton's root-finding iteration on f(x) = x^2 gives x_{k+1} = x_k/2, so the error is merely halved at each step.)
Warren
Yes, if you have f(x) = f'(x) = 0 at the root, then you destroy the local
quadratic convergence of Newton's method.
But in nonlinear programming you do:

    f(x) = |F(x)|^2
    x_* = argmin_x f(x)

Then you write down the first-order necessary condition for optimality:

    0 = df(x)/dx

i.e.

    0 = G(x) := 2 (dF/dx)^T F

So you get a nonlinear system that you can solve with Newton's method.
Differentiating once more gives

    H = 2 (d^2F/dx^2) F + 2 (dF/dx)^T (dF/dx)

and the update rule

    x_+ = x - H(x)^-1 G(x)

Near the solution F is close to zero, so H is dominated by the term 2 (dF/dx)^T (dF/dx); if dF/dx has full rank there, H is symmetric positive definite and Newton
should converge quadratically.
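For scalar x that iteration looks like this (toy F with hand-coded derivatives; purely a sketch):

    import numpy as np

    def F(x):   return np.cos(x) - x    # toy residual
    def dF(x):  return -np.sin(x) - 1.0
    def d2F(x): return -np.cos(x)

    def newton_on_gradient(x, tol=1e-12, maxit=50):
        for _ in range(maxit):
            G = 2.0 * dF(x) * F(x)                     # G = 2 (dF/dx) F
            H = 2.0 * d2F(x) * F(x) + 2.0 * dF(x)**2   # H = 2 (d^2F/dx^2) F + 2 (dF/dx)^2
            step = G / H
            x = x - step
            if abs(step) < tol:
                break
        return x

    print(newton_on_gradient(0.5))   # ~0.739085, the root of cos(x) = x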
If instead you have the constrained problem

    x_* = argmin_x f(x)
    subject to g(x) <= 0

then you can try the penalty formulation

    x_* = argmin_x f(x) + \rho (max(g(x), 0))^p

where e.g. p = 2, and make \rho large.
Problem: this becomes badly conditioned for very large \rho.
But that's still much better than adding a constant when g(x) > 0!
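A quick sketch of that penalty approach with fmin (toy f and g, fixed p = 2):

    import numpy as np
    from scipy.optimize import fmin

    def f(x):
        return (x[0] - 2.0)**2     # toy objective, unconstrained minimum at x = 2

    def g(x):
        return x[0] - 1.0          # constraint x <= 1, written as g(x) <= 0

    def penalized(x, rho):
        return f(x) + rho * max(g(x), 0.0)**2

    x = [0.0]
    for rho in [1.0, 10.0, 100.0, 1000.0]:
        # warm-start each solve from the previous solution as rho grows
        x = fmin(penalized, x, args=(rho,), disp=False)
    print(x)   # -> close to the constrained optimum x = 1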
2009/8/4 Ernest Adrogué <eadr...@gmx.net>:
But... this won't prevent the function from being evaluated
outside of its domain, which is the real issue here, will it?
2009/8/4 Ernest Adrogué <eadr...@gmx.net>:
Yes. :)
I would like to try ipopt; unfortunately it depends on third-party
libraries with draconian licenses. Only MUMPS seems to actually be
"free", and even then, in order to get it you have to fill in a form and wait
until they send you the software by mail!!
I use MA27 (I think that's what it is called) as an academic user. It's not as
bad as you wrote above:
you register, you get a confirmation mail with a password, and then you
can log in and download the code as a couple of Fortran source files.
So, not such a big deal.
2009/8/4 Ernest Adrogué <eadr...@gmx.net>:
In all fairness, a little while later I received an e-mail with the link to
download MUMPS.
The README file states that the software is free of charge and in the public
domain. Why they don't put the download link directly on the web I don't know.
2009/8/6 Ernest Adrogué <eadr...@gmx.net>:
From the ipopt website:
"""
Currently, the following linear solvers can be used:
* MA27 from the Harwell Subroutine Library
(see http://www.cse.clrc.ac.uk/nag/hsl/).
* MA57 from the Harwell Subroutine Library
(see http://www.cse.clrc.ac.uk/nag/hsl/).
* MUMPS (MUltifrontal Massively Parallel sparse direct Solver)
(see http://graal.ens-lyon.fr/MUMPS/)
* The Parallel Sparse Direct Solver (PARDISO)
(see http://www.computational.unibas.ch/cs/scicomp/software/pardiso/).
Note: The Pardiso version in Intel's MKL library does not yet support the features necessary for IPOPT.
* The Watson Sparse Matrix Package (WSMP)
(see http://www-users.cs.umn.edu/~agupta/wsmp.html)
You should include at least one of the linear solvers above in order to run IPOPT, and if you want to be able to switch easily between different alternatives, you can compile IPOPT with all of them.
"""
So, I guess, yes, it is supported. I can't tell you for certain,
because the Debian distribution that I'm using lacks a certain header
file from the metis/scotch library that is required to compile MUMPS,
so I haven't had a chance to try it out yet.