NameError with nsolve


New2Sympy

Jul 31, 2009, 10:08:46 AM
to sympy
Hi All,

I am trying to get the Real root as shown below and keep getting a
NameError. Any suggestions?

I am using python 2.6.2 and sympy 0.6.5.

The solution should be:

a=0.31883011387319

>>> from sympy import Symbol, nsolve
>>> a=Symbol('a')
>>> f1=1/(0.001+a)**3-6/(0.9-a)**3
>>> print nsolve(f1,a,0.3)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/site-packages/sympy/solvers/solvers.py", line 929, in nsolve
    return findroot(f, x0, **kwargs)
  File "/usr/lib/python2.6/site-packages/sympy/mpmath/mptypes.py", line 68, in g
    return f(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/sympy/mpmath/optimization.py", line 914, in findroot
    fx = f(*x0)
  File "<string>", line 1, in <lambda>
NameError: global name 'conjugate' is not defined



Thanks.

Regards

smichr

Aug 1, 2009, 6:14:34 PM
to sympy


On Jul 31, 7:08 pm, New2Sympy <anartz.li...@gmail.com> wrote:
> Hi All,
>
> I am trying to get the Real root as shown below and keep getting a
> NameError. Any suggestions?
>
[cut]
> NameError: global name 'conjugate' is not defined
>

When you ask for a numerical solution, the first thing that nsolve
does is evaluate your function and then create a "lambdified" version
of it to pass to the solver. When you evaluate your function it looks
like this:

>>> 1/(.001+a)**3-6/(.9-a)**3
(0.001 + a)**(-3) - 6/(0.9 - a)**3
>>> _.evalf()
3.0e-6*conjugate(a)/(1.0e-6 + 0.001*a + 0.001*conjugate(a) + a*conjugate(a))**3
 + 14.58*conjugate(a)/(0.81 - 0.9*a - 0.9*conjugate(a) + a*conjugate(a))**3
 + 1.0e-9/(1.0e-6 + 0.001*a + 0.001*conjugate(a) + a*conjugate(a))**3
 - 4.374/(0.81 - 0.9*a - 0.9*conjugate(a) + a*conjugate(a))**3
 + 0.003*conjugate(a)**2/(1.0e-6 + 0.001*a + 0.001*conjugate(a) + a*conjugate(a))**3
 - 16.2*conjugate(a)**2/(0.81 - 0.9*a - 0.9*conjugate(a) + a*conjugate(a))**3
 + conjugate(a)**3/(1.0e-6 + 0.001*a + 0.001*conjugate(a) + a*conjugate(a))**3
 + 6.0*conjugate(a)**3/(0.81 - 0.9*a - 0.9*conjugate(a) + a*conjugate(a))**3

All those conjugates are there because sympy (I imagine) doesn't know
whether a will be real or imaginary, so it's covering all bases. The
problem is, in order to evaluate this, you need sympy but the only
thing that nsolve passes along to the solver is mpmath's module. The
solution is to pass sympy, too. I also like to watch the iteration
process so I set the verbose flag to true:

>>> nsolve(1/(.001+a)**3-6/(.9-a)**3,a,.3,modules="sympy",verbose=True)
x: 0.315560066156274077187
error: 0.25
x: 0.318099998922522143874
error: 0.234439933843725911711
x: 0.31882324267983762464
error: 0.00253993276624806668673
x: 0.318830099704768381578
error: 0.000723243757315480766644
x: 0.318830113872912139068
error: 0.00000685702493075693761643
x: 0.318830113873185857794
error: 1.4168143757489826177e-8
x: 0.318830113873185911755
error: 2.7371872575728692042e-13
x: 0.318830113873185898265
error: 5.39615044542560845509e-17
x: 0.318830113873185905365
error: 1.34903761135640211377e-17
x: 0.31883011387318590714
error: 7.10020074079996919847e-18
x: 0.318830113873185905587
error: 1.77506342008979314806e-18
mpf('0.31883011387318591')

And there you go!
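
As an aside, the "lambdified" version that nsolve builds can be imitated with SymPy's public lambdify function. This is only a sketch of the idea, run with a modern Python 3/SymPy; the variable names are my own:

```python
# A sketch of the "lambdified" step described above, using SymPy's public
# lambdify (modern Python 3/SymPy; names are illustrative, not nsolve's own).
from sympy import Symbol, lambdify

a = Symbol('a')
expr = 1/(0.001 + a)**3 - 6/(0.9 - a)**3

# Compile the expression into a plain Python function backed by the math module.
f = lambdify(a, expr, "math")

# At the root reported in this thread, the residual is essentially zero.
root = 0.31883011387318591
print(f(root))
```

The resulting f is an ordinary Python callable, which is exactly the kind of object an iterative solver wants.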

Notes:

1) If you want to use the sure-fire bisect method, don't forget to
send in two values for the starting point:

>>> nsolve(1/(.001+a)**3-6/(.9-a)**3,a,(.1,.5),modules="sympy",method="bisect")
mpf('0.3188301138731859')
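
The reason bisect needs two starting values is that it works by repeatedly halving a bracket over which the function changes sign. A minimal plain-Python sketch of that idea (my own illustration, not mpmath's implementation):

```python
# Minimal bisection sketch: the bracket [lo, hi] must give opposite signs of f.

def f(x):
    return 1/(0.001 + x)**3 - 6/(0.9 - x)**3

lo, hi = 0.1, 0.5
assert f(lo) * f(hi) < 0  # opposite signs, so a root lies between them

for _ in range(60):        # halve the bracket until it is tiny
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(root)  # close to 0.3188301138731859
```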

2) For nasty functions like this an alternative approach would be to
find zeros of the numerator instead of the whole expression, or solve
the problem symbolically and substitute in the values when you are
done:

>>> num,den = (1/(.001+a)**3-6/(.9-a)**3).as_numer_denom()
>>> nsolve(num,a,.3) # no need for sympy now
mpf('0.31883011387318584')
>>>

or

>>> var('c1 c2');solve(1/(c1+a)**3-6/(c2-a)**3,a)

Hmmm...it hangs. (I filed this as issue 1571.) But we can use the
numerator trick to make things easier for the solver:

>>> num = (1/(c1+a)**3-6/(c2-a)**3).as_numer_denom()[0]
>>> ans=solve(num,a)
>>> [t.subs({c1:.001,c2:.9}).evalf(n=4) for t in ans]
[3.188e-1, 3.216e-2 - 5.706e-1*I, 3.216e-2 + 5.706e-1*I]
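
For this particular equation the numerator trick can even be checked by hand: taking real cube roots of 1/(0.001+a)**3 = 6/(0.9-a)**3 gives 0.9 - a = 6**(1/3)*(0.001 + a), which isolates the real root. A quick sketch of that re-derivation (my own, plain Python):

```python
# Hand-derived real root: cube-rooting 1/(0.001+a)**3 = 6/(0.9-a)**3
# gives 0.9 - a = 6**(1/3)*(0.001 + a), so the real root is:
c = 6 ** (1.0 / 3.0)
root = (0.9 - 0.001 * c) / (1 + c)
print(root)  # about 0.3188301138731859

# The numerator of the original expression vanishes there.
num = (0.9 - root)**3 - 6 * (0.001 + root)**3
print(num)
```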

Best regards, and welcome to sympy!
/c

New2Sympy

Aug 3, 2009, 4:25:57 AM
to sympy
Thanks for your help! That was a complete answer :)

Best regards.

smichr

Aug 3, 2009, 7:55:17 AM
to sympy


On Aug 3, 1:25 pm, New2Sympy <anartz.li...@gmail.com> wrote:
> Thanks for your help! that was a complete answer :)
>

...I could set my verbose flag to False :-) But sometimes these
questions are good opportunities for learning, too. It's something
that I've appreciated about responses on the python tutor list.

/c

Ondrej Certik

Aug 3, 2009, 12:30:26 PM
to sy...@googlegroups.com

Indeed, your responses are excellent. Thanks for doing that. :)

Ondrej

Vinzent Steinberg

Aug 5, 2009, 12:29:10 PM
to sympy
On Aug 2, 12:14 am, smichr <smi...@gmail.com> wrote:
> >>> num,den = (1/(.001+a)**3-6/(.9-a)**3).as_numer_denom()
> >>> nsolve(num,a,.3) # no need for sympy now

Maybe solve()/nsolve() should do this for you. What do you think?

Vinzent

smichr

Aug 6, 2009, 1:57:25 PM
to sympy


On Aug 5, 9:29 pm, Vinzent Steinberg wrote:
I was thinking the same thing, but you would have to watch out that
you didn't introduce spurious roots: e.g.

>>> (x/(x-1)).diff(x)
-1/(1 - x) - x/(1 - x)**2
>>> n,d=_.as_numer_denom();n/d
(-x*(1 - x) - (1 - x)**2)/(1 - x)**3

Unless we cancel out the common factors (something that isn't working
in general yet but will hopefully work with the new polys module) we
will get a spurious root at x=1. I was also wondering whether the
routine should try to solve the expression symbolically first, to see
if that's possible, before solving it numerically...or should sympy
just trust the user and not waste time checking for symbolic solutions
when a numeric solution has been requested?
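
The spurious-root danger is easy to guard against numerically: evaluate the denominator at each numerator root and discard candidates where it also vanishes. A plain-Python sketch of that check (my own illustration, using the x/(x-1) example above):

```python
# For f(x) = d/dx [x/(x-1)] = -1/(x-1)**2, clearing denominators gives the
# numerator n(x) = -x*(1-x) - (1-x)**2, which simplifies to x - 1 and so has
# the spurious "root" x = 1, where the denominator (1-x)**3 vanishes too.

def numerator(x):
    return -x * (1 - x) - (1 - x)**2

def denominator(x):
    return (1 - x)**3

candidates = [1.0]   # root(s) of the numerator
tol = 1e-12

# keep only candidates where the denominator is safely nonzero
genuine = [x for x in candidates if abs(denominator(x)) > tol]
print(genuine)  # the candidate x = 1 is rejected as spurious
```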

/c


Vinzent Steinberg

Aug 6, 2009, 2:26:03 PM
to sympy
On 6 Aug., 19:57, smichr <smi...@gmail.com> wrote:
> On Aug 5, 9:29 pm, Vinzent Steinberg
>
> <vinzent.steinb...@googlemail.com> wrote:
> > On Aug 2, 12:14 am, smichr <smi...@gmail.com> wrote:
>
> > > >>> num,den = (1/(.001+a)**3-6/(.9-a)**3).as_numer_denom()
> > > >>> nsolve(num,a,.3) # no need for sympy now
>
> > Maybe solve()/nsolve() should do this for you. What do you think?
>
> > Vinzent
>
> I was thinking the same thing, but you would have to watch out that
> you didn't introduce spurious roots: e.g.
>
> >>> (x/(x-1)).diff(x)
> -1/(1 - x) - x/(1 - x)**2
> >>> n,d=_.as_numer_denom();n/d
> (-x*(1 - x) - (1 - x)**2)/(1 - x)**3
>
> Unless we cancel out the common factors (something that isn't working
> in general yet but will hopefully work with the new polys module) we
> will get a root of x=1.

It should be easy to check for false roots numerically, but yeah, it
has to be implemented.

> I was also wondering if the routine shouldn't
> try to solve the expression symbolically to see if it's possible
> before trying to solve it numerically...or should sympy just trust the
> user and not waste the time checking symbolic solutions if a numeric
> solution has been requested?

I would do it this way: the user wants to solve() an equation; if we
can't do it symbolically, we return an instance of RootOf(), which can
be evalf()'ed to get a numerical approximation. nsolve() should always
solve numerically, possibly doing some symbolic optimizations, which
could be disabled.
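
This is, in fact, roughly how SymPy later evolved: modern versions expose CRootOf, a symbolic handle on the i-th root of a polynomial that can be evalf()'ed on demand. A sketch with a current SymPy (the quintic is my own example):

```python
# Modern SymPy's CRootOf represents a polynomial root symbolically and can be
# evaluated numerically on demand (example of my own choosing).
from sympy import CRootOf, Symbol

x = Symbol('x')
# x**5 - x + 1 is not solvable in radicals; index 0 is its one real root.
r0 = CRootOf(x**5 - x + 1, 0)
print(r0.evalf())  # approximately -1.1673
```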

BTW: It's now fixed to find the root of 1/(0.001+a)**3-6/(0.9-a)**3.
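
With that fix in place, the original failing call from the start of this thread runs directly; on a current SymPy no modules="sympy" workaround is needed:

```python
# The original failing call from this thread, run on a modern SymPy:
# no modules="sympy" workaround is required any more.
from sympy import Symbol, nsolve

a = Symbol('a')
f1 = 1/(0.001 + a)**3 - 6/(0.9 - a)**3
root = nsolve(f1, a, 0.3)
print(root)  # approximately 0.318830113873186
```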

Vinzent


New2Sympy

Aug 11, 2009, 8:14:01 AM
to sympy
I think I am in agreement with Vinzent; this would probably solve the
issue I am having now...

I am trying to obtain the solutions using the lines below. When I solve
with the constants assigned to the variables, I get complex solutions
with very small imaginary components that should really be real
solutions. I assume this happens due to some precision setting. And
when I try to solve it symbolically, it hangs...

The equation to solve is the numerator of the function, obtained using
.as_numer_denom().

>>> from sympy import var,solve
>>> # 'a', 'e', 'c', 'x', 'o', 'v', 'b', 's', 'w', 'd' and 'f' are constants; we want to solve for 'r'
>>> a=5.75
>>> e=8e-6
>>> c=0.25
>>> x=20.
>>> o=0.02
>>> v=0.005
>>> b=0.12
>>> s=0.23
>>> w=0.23
>>> d=0.
>>> f=20.
>>>
>>> var('r')
r
>>>
>>> myEqConstants=-4*a**2*r**2*e**2*(0.002*c*x*(o + v) + 2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2)**2 -4*a**2*r**2*e**2*(0.002*c*x*(o + v) - (2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2))**2 +16*a**4*d**2*r**4*e**4 + 16*a**4*r**4*f**2*e**4
>>>
>>> mySols=solve(myEqConstants,r)
>>> print [aa.evalf() for aa in mySols]
[27.1235615040284 - 2.75e-13*I, -29.4436689512413 - 2.53e-13*I, 0.192368597088186 + 2.7527e-13*I, 0, -0.192261149875308 + 2.5278e-13*I]
>>>
>>> var('a r e c x o v b s w d f')
(a, r, e, c, x, o, v, b, s, w, d, f)
>>>
>>> myEqSymbolic=-4*a**2*r**2*e**2*(0.002*c*x*(o + v) + 2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2)**2-4*a**2*r**2*e**2*(0.002*c*x*(o + v) - (2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2))**2+16*a**4*d**2*r**4*e**4 + 16*a**4*r**4*f**2*e**4
>>>
>>> mySols=solve(myEqSymbolic,r)


Thanks for your help!

regards, anartz



Vinzent Steinberg

Aug 11, 2009, 4:58:53 PM
to sympy
There are three ways to solve this using sympy, and you chose the
slowest one. ;)

Why do you want to solve it symbolically? It's a polynomial of 6th
degree, so the symbolic solutions, if any, are probably very complicated.

The best way to get the solutions is using mpmath.polyroots, see [1].

Just run the following script:

from sympy import *
a = 5.75
e = 8e-6
c = 0.25
x = 20.
o = 0.02
v = 0.005
b = 0.12
s = 0.23
w = 0.23
d = 0.
f = 20.
r = Symbol('r')

myEqConstants = (-4*a**2*r**2*e**2*(0.002*c*x*(o + v) + 2*a*b*r*e + 2*a*s*r*e
    + 2*a*w*r*e + a*e*r**2)**2 - 4*a**2*r**2*e**2*(0.002*c*x*(o + v)
    - (2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2))**2
    + 16*a**4*d**2*r**4*e**4 + 16*a**4*r**4*f**2*e**4)
pretty_print(myEqConstants.expand())
print
print 'first variant:'
# we can solve it numerically, using a starting point
print nsolve(myEqConstants, r, -30)
print nsolve(myEqConstants, r, 0)
print nsolve(myEqConstants, r, 30)
print
print 'second variant:'
# mpmath has a solver for polynomials, but we have to convert it to a
# list of coefficients (please note that the results are not very
# accurate; you can refine them using an iterative solver)
mypoly = Poly(myEqConstants.expand(), r)
print mpmath.polyroots(mypoly.coeffs)
print
print 'third variant:'
# solve it symbolically
pretty_print([e.evalf(50) for e in solve(myEqConstants, r)])  # this is slow


I get this output:

- 1.058e-15⋅r² + 2.86075194816512e-14⋅r⁴ - 8.310158336e-17⋅r⁵ - 3.5819648e-17⋅r⁶

first variant:
-29.4436689512432
0.0
27.1235615040264

second variant:
[mpf('-29.461988293519347'), mpf('0.036987320809019009'), mpf('27.105000972710329')]

third variant:
[27.123561504028409502282500262935396379311391281644 - 2.7492605728847538643339656125512261631e-13⋅ⅈ,
 0.19236859708818558635612456726563255832985519147261 + 2.752656904518683537962579904305246576705e-13⋅ⅈ,
 -0.19226114987530812139125361028577897875296339723201 + 2.527748870415793485879011339093847593146e-13⋅ⅈ,
 -29.443668951241286363286045823830091968432655634478 - 2.5311452020497231595076256308478680067e-13⋅ⅈ,
 0]


Indeed it's strange that the imaginary parts don't vanish, even if you
evaluate at higher precision. This smells like a bug (assuming the
roots really are real). Fredrik, what do you think?
Please note that the second variant is somewhat inaccurate; you can,
however, use its results as starting points for the first variant (see
the documentation).

Vinzent


[1] http://mpmath.googlecode.com/svn/trunk/doc/build/calculus/polynomials.html#polynomial-roots-polyroots

New2Sympy

Aug 12, 2009, 7:20:49 AM
to sympy
I did use nsolve before (that was the topic of my first post :)), but I
need to find all the solutions in a range, and the solutions can be
very close to each other depending on the constants. I am trying to
optimize a problem, so I would like to check the final outcome for all
valid solutions. I thought about solving the equation symbolically and
then calling .evalf() on all the solutions, to make sure I had them
all, but ended up with the strange imaginary parts. Speed would not be
that much of an issue in this case... yet :)

In the end, I added a loop to the script that searches for solutions
with the nsolve function, using initial points that range from the
minimum to the maximum possible (physical) solutions with a fine
stepsize. I append each solution to a list as it comes, if it has not
already been found from a different initial point.
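
That scan-and-deduplicate loop is a generic pattern; here is a plain-Python sketch of the same idea, using Newton's method on a simple cubic with known roots (a toy example of my own, not the thread's equation):

```python
# Multi-start root search with de-duplication, sketched on f(x) = x**3 - x,
# whose roots are -1, 0 and 1.

def f(x):
    return x**3 - x

def fprime(x):
    return 3 * x**2 - 1

def newton(x0, steps=50):
    x = x0
    for _ in range(steps):
        d = fprime(x)
        if d == 0:
            return None
        x = x - f(x) / d
    return x

roots = []
for i in range(-20, 21):        # starting points -2.0, -1.9, ..., 2.0
    x0 = i / 10.0
    r = newton(x0)
    # keep the root only if it converged and was not already found
    if r is not None and abs(f(r)) < 1e-9 and all(abs(r - q) > 1e-6 for q in roots):
        roots.append(r)

print(sorted(roots))  # close to [-1.0, 0.0, 1.0]
```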

Regards, anartz



Fredrik Johansson

Aug 12, 2009, 7:24:56 AM
to sy...@googlegroups.com
On Tue, Aug 11, 2009 at 10:58 PM, Vinzent
Steinberg<vinzent....@googlemail.com> wrote:
>
> # mpmath has a solver for polynomials, but we have to convert it to a
> # list of coefficients (please note that the results are not very
> # accurate; you can refine them using an iterative solver)

polyroots should give full accuracy unless there are repeated roots.

> Indeed it's strange that the imaginary parts don't vanish, even if you
> use higher precision for evaluating. This smells like a bug (assuming
> the roots are really real). Fredrik, what do you think?
> Please note that the second variant is somewhat inaccurate, you can
> however use it as starting points for the first variant (see
> documentation).

It should work if you pass myEqConstants.n(50) instead of
myEqConstants as input. I think it assumes that the input is only
15-digit accurate.

Fredrik

Anartz Unamuno

Aug 12, 2009, 7:48:54 AM
to sy...@googlegroups.com
Below is the outcome I get when I use myEqConstants.n(100):

third variant:
[-0.192261149873352 + .0e-21⋅ⅈ, -29.4436689512432 + .0e-19⋅ⅈ, 27.1235615040264 - .0e-19⋅ⅈ, 0, 0.192368597090154 - .0e-21⋅ⅈ]

The imaginary parts are zero, but they are still in the solutions. Should they not disappear?

Thanks

Regards, anartz

smichr

Aug 12, 2009, 6:12:56 PM
to sympy


On Aug 12, 4:48 pm, Anartz Unamuno <anartz.li...@gmail.com> wrote:
> Below is the outcome I get when I use myEqConstants.n(100):
>
> third variant:
> [-0.192261149873352 + .0e-21⋅ⅈ, -29.4436689512432 + .0e-19⋅ⅈ, 27.1235615040264 - .0e-19⋅ⅈ, 0, 0.192368597090154 - .0e-21⋅ⅈ]
>
> the imaginary parts are zeros, but they are still in the solution. Should
> they not disappear?


When you have a solution to a quartic, there are parts of the solution
that have I*sqrt(3) in them. If you are careful (I suppose) you should
be able to get those to cancel out and end up with only the real
answer. When you start evaluating, though, that term adds an imaginary
component (with some uncertainty) into the rest of the calculation, and
you end up with that "almost zero" artifact when you are done. Try the
chop=True argument, which works most of the time:

>>> for s in mySols:
... print N(s,n=4,chop=True)
...
27.12
-1.923e-1
1.924e-1
-29.44
0

As for the symbolic approach, that seems good (especially if you just
want to change a few parameters), but even though sympy *can* handle
symbols, I would recommend you do a little work so it doesn't have to
handle so many... at least until sympy is modified to know when to
ignore clusters of constants. Let's use the simpler manipulations to
look at your eq:

>>> myeq=(-4*a**2*r**2*e**2*(0.002*c*x*(o + v) + 2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2)**2
... -4*a**2*r**2*e**2*(0.002*c*x*(o + v) - (2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2))**2
... +16*a**4*d**2*r**4*e**4 + 16*a**4*r**4*f**2*e**4)

OK, let's change it to a poly so we can see the order clearly.
Although we could use expand, there's a nice method to pull off
coefficients:

>>> mypoly=Poly(myeq,r); mypoly
Poly(-8*a**4*e**4*r**6 + (-32*b*a**4*e**4 - 32*s*a**4*e**4 - 32*w*a**4*e**4)*r**5
 + (-32*a**4*e**4*s**2 - 32*a**4*e**4*w**2 + 16*a**4*e**4*f**2 - 32*a**4*b**2*e**4
 + 16*a**4*d**2*e**4 - 64*b*s*a**4*e**4 - 64*b*w*a**4*e**4 - 64*s*w*a**4*e**4)*r**4
 + (-3.2e-5*a**2*c**2*e**2*o**2*x**2 - 3.2e-5*a**2*c**2*e**2*v**2*x**2
 - 6.4e-5*o*v*a**2*c**2*e**2*x**2)*r**2, r)

Now let's look at the coefficients in "monic" form (reducing the
leading term's coeff to 1):

>>> for xi in list(mypoly.as_monic().iter_all_coeffs()):
...     print xi
...
1
4*b + 4*s + 4*w
8*b*s + 8*b*w + 8*s*w - 2*d**2 - 2*f**2 + 4*b**2 + 4*s**2 + 4*w**2
0
(3.2e-5*c**2*o**2*x**2 + 3.2e-5*c**2*v**2*x**2 + 6.4e-5*o*v*c**2*x**2)/(8*a**2*e**2)
0
0

Well, that's interesting: it's got a 6th order term (there are 7
coeffs), but the constant, linear, and cubic coefficients are 0, so you
can factor out r**2 to get r**2*(r**4 + A*r**3 + B*r**2 + C).
We'll capture the constants now for use later:
>>> one,AA,BB,z,CC,z,z = mypoly.as_monic().iter_all_coeffs()

So let's solve this (in an eyeblink, compared to the minutes it took
with the full form) and use cse on the results:


>>> solns = solve(x**4+A*x**3+B*x**2+C,x,simplified=False)
>>> for root in solns:
... rep,eq = cse(root)
... print rep
... print eq
... print
...
[(x0, A**2), (x1, 3*x0/256), (x2, B/16), (x3, x1 - x2), (x4, 32*x1), (x5, B - x4),
(x6, x4/3), (x7, 8*x2), (x8, x6 - x7), (x9, x8**2), (x10, x5**2), (x11, 3**(1/2)),
(x12, -64*x3*x6), (x13, 8*C), (x14, x12 + x13), (x15, -x13/4), (x16, -x12/4),
(x17, x10 + x15 + x16), (x18, x14*x17), (x19, x10**(3/2)), (x20, 72*x6*x9),
(x21, x19 + x20), (x22, -x21*x5), (x23, x18 + x22), (x24, -8*x15), (x25, -8*x16),
(x26, x24 + x25), (x27, x23*x26), (x28, 3*x20), (x29, 8*x7), (x30, 12*x6),
(x31, x29 - x30), (x32, x10*x31), (x33, x28 + x32), (x34, x28*x33/27),
(x35, x27 + x34), (x36, x35**(1/2)), (x37, x11*x36/18), (x38, -x25/24),
(x39, -x24/24), (x40, x10/54), (x41, x38 + x39 + x40), (x42, x29/2), (x43, x30/2),
(x44, x42 - x43), (x45, x41*x44), (x46, x28/54), (x47, x37 + x45 + x46),
(x48, x47**(-1/3)), (x49, x48**(-2)), (x50, 9*x49), (x51, -18*x39), (x52, -18*x38),
(x53, x49**(1/2)), (x54, 3*x43), (x55, -3*x42), (x56, x54 + x55), (x57, x53*x56),
(x58, x10 + x50 + x51 + x52 + x57), (x59, x48*x58), (x60, x59**(1/2))]
[-A/4 - x60/6 + (x48*(-x60*(x10 + x50 + x51 + x52 + x53*(-2*x54 - 2*x55)) + 54*A*x53*x8))**(1/2)/(6*x60**(1/2))]

etc...

Since you have only 1 equation, you access it from eq as eq[0]. Those
reps are useful in reversed order because you can feed them directly to
.subs() (again, in reversed order) and you will obtain your original
equation back.
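
That round trip, substituting the cse replacements back in reversed order, looks like this on a small example (my own, with a current SymPy):

```python
# cse round trip: substituting the replacement pairs back in reversed order
# recovers the original expression (small example of my own choosing).
from sympy import symbols, cse, sqrt, simplify

x, y = symbols('x y')
expr = (x + y)**2 + sqrt((x + y)**2 + 1)

reps, reduced = cse(expr)
restored = reduced[0].subs(list(reversed(reps)))

print(simplify(restored - expr))  # 0
```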

So if you are going to be making some changes to parameters you could
do something like the following:

#do this once so it's always ready
>>> rep = list(reversed(rep))
>>> myrep=[(A,AA),(B,BB),(C,CC)] #AA etc...were captured previously

#define your constants
>>> constants={a:5.75,e:8e-6,c:0.25,x:20.,o:0.02,v:0.005,b:0.12,s:0.23,w:0.23,d:0.,f:20.,}

#update A, B, C
>>> abc = [(tmp[0], tmp[1].subs(constants)) for tmp in myrep]

#calculate your solution
>>> bigE = eq[0].subs(rep + abc)
>>> bigE
-0.58 - ((638203.292983406 + 4804.0368*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3)
 + 9*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(2/3))
 /(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3))**(1/2)/6
 + I*((((638203.292983406 + 4804.0368*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3)
 + 9*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(2/3))
 /(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3))**(1/2)
 *(638203.292983406 - 9608.0736*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3)
 + 9*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(2/3))
 - 50112.0*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3))
 /(((638203.292983406 + 4804.0368*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3)
 + 9*(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(2/3))
 /(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3))**(1/2)
 *(-18835904.2246641 + 3211610.35444874*I*3**(1/2))**(1/3)))**(1/2)/6
>>> N(_)
-29.4350345287009 + 0.00248878631934675*I

And that was for the last of the 4 roots. {hmmm...that imaginary part
seems too big...I wonder if there's something lurking in there still.}
BUT...anyway, cse is your friend if you are doing repeated
calculations, and doing some intelligent simplification by hand (using
the CAS to help you) is a "very good thing": the solution process goes
down from minutes (or hangs) to seconds.

Best regards,
/c

Christopher Smith

New2Sympy

Aug 14, 2009, 6:37:53 AM
to sympy
Hi again,

I am getting confused now. See the script and the results below.

The First Variant (using nsolve) finds some extra solutions that would
probably be gone if I changed the tolerance, but using the option
tol=1e-25 crashes; in fact, I could not go beyond tol=1e-21.

The Second Variant (using Poly) kind of works, but a small fix is
required: "mypoly.coeffs" omits the zeros, so if the coefficients for
x^0, x^1 and x^3 are 0, they do not appear. Is this a bug or a feature?

The Third Variant (using solve) is completely messed up now; the
solutions don't even get close to the real ones.

I plotted the function in a spreadsheet, and found that the solutions
provided by the Second Variant (with the manual fix) are the correct
ones.

Any ideas on how to fix all this to get the same results from all
methods? How can I believe any of the results otherwise?

Thanks!

regards, anartz

--------------------------------
-- The script --
--------------------------------

from sympy import *
from numpy import arange

a = 5.75
e = 8e-6
c = 0.5
x = 20.
o = 0.02
v = 0.005
b = 0.12
s = 0.23
w = 0.23
d = 0.
f = 8.
r = Symbol('r')

myEqConstants = (-4*a**2*r**2*e**2*(0.002*c*x*(o + v) + 2*a*b*r*e + 2*a*s*r*e
    + 2*a*w*r*e + a*e*r**2)**2 - 4*a**2*r**2*e**2*(0.002*c*x*(o + v)
    - (2*a*b*r*e + 2*a*s*r*e + 2*a*w*r*e + a*e*r**2))**2
    + 16*a**4*d**2*r**4*e**4 + 16*a**4*r**4*f**2*e**4)

print "Equation to solve is: "
pretty_print(myEqConstants.expand())
print
print 'first variant:'
print "Solutions using nsolve function in the range are:"

myRoot = []
for aa in arange(-20., 20., 0.1):
    NewRoot = nsolve(myEqConstants, r, aa, tol=1e-20)
    # don't get loads of roots near 0; this also excludes 0
    if ((str(NewRoot) not in myRoot) and
            (NewRoot > 0.0001 or NewRoot < -0.0001)):
        myRoot.append(str(NewRoot))

for aa in myRoot:
    print aa

print

print 'Second Variant:'
mypoly = Poly(myEqConstants.expand(), r)
print "mypoly.coeffs:" + str(mypoly.coeffs)
print ("instead of: (-3.58196480000000e-17, -8.31015833600000e-17, "
       "4.53671602565120e-15, 0, -4.23200000000000e-15, 0, 0)")

print
print "Using correct coefficients:"
for aa in mpmath.polyroots((-3.58196480000000e-17, -8.31015833600000e-17,
        4.53671602565120e-15, 0, -4.23200000000000e-15, 0, 0)):
    print aa

print
print 'Third variant:'
print "Solutions obtained using solve function:"
NewSols = solve(myEqConstants, r)
TheSols = [N(e, n=4, chop=True) for e in NewSols]

for aa in TheSols:
    pprint(aa)

--------------------------------
-- End Script --
--------------------------------


--------------------------------
-- Results --
--------------------------------

Equation to solve is:
- 4.232e-15⋅r² + 4.5367160256512e-15⋅r⁴ - 8.310158336e-17⋅r⁵ - 3.5819648e-17⋅r⁶

first variant:
Solutions using nsolve function in the range are:
-12.4399173136662
-10.304955620728
-10.151265495134
0.978377565046911
-0.960891724232247
8.71027033335483
10.1024314728516

Second Variant:
mypoly.coeffs:(-3.58196480000000e-17, -8.31015833600000e-17, 4.53671602565120e-15, -4.23200000000000e-15)
instead of: (-3.58196480000000e-17, -8.31015833600000e-17, 4.53671602565120e-15, 0, -4.23200000000000e-15, 0, 0)

Using correct coefficients:
-12.4399173136662
-0.960891724232247
-9.52592891648496e-19
5.88734784480316e-19
0.978377565046911
10.1024314728516

Third variant:
Solutions obtained using solve function:
-7.834e-1
-12.62
9.870
0
1.211
--------------------------------
-- End Results --
--------------------------------

smichr

Aug 15, 2009, 9:49:56 AM
to sympy


On Aug 14, 3:37 pm, New2Sympy <anartz.li...@gmail.com> wrote:
> Hi again,
>
> I am getting confused now. See the script and the results below.
>
You are running into numerical issues (underflow, I believe) because of
scaling. If you don't normalize your polynomial, its coefficients are
on the order of 1e-15; since your roots are on the order of 1,
substituting 1 into your polynomial gives something very close to zero:

>>> mypoly=myEqConstants.as_poly(r)
>>> mypoly.subs(r,1)
1.85794794291198e-16

So if you are using a numerical solver and you tell it that the
function coming within 1e-15 of the x-axis represents a root,
well...any function multiplied by a small enough factor will give you
something close to zero! :-) Let sympy help you get this into
normalized form:

>>> mymonic=mypoly.as_monic();mymonic
Poly(1.0*r**6 + 2.32*r**5 - 126.6544*r**4 + 118.147448015123*r**2, r)
>>> mymonic.subs(r,1)
-5.18695198487708

OHH...so it's not so close to zero after all. Your numerical root
solver is going to do a lot better for you now:
first variant with monic:
Solutions using nsolve function in the range are:
-12.4399173136662
0.978377565046911
-0.960891724232247
10.1024314728516
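
The effect of the normalization is easy to see numerically. A sketch with plain-Python Horner evaluation (the coefficients are copied from this thread, highest degree first; the horner helper is my own):

```python
# Scaling effect: the raw coefficients are O(1e-15), so p(1) looks like a
# "root" to any absolute 1e-15 tolerance; after dividing by the leading
# coefficient (monic form) the same point is clearly far from a root.
raw = [-3.5819648e-17, -8.310158336e-17, 4.5367160256512e-15, 0.0,
       -4.232e-15, 0.0, 0.0]           # coefficients of r**6 ... r**0

def horner(coeffs, x):
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

monic = [c / raw[0] for c in raw]

print(horner(raw, 1.0))    # about 1.86e-16: deceptively tiny
print(horner(monic, 1.0))  # about -5.19: obviously not a root
```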


> Second Variant (using Poly) kind of works, but there is a small fix
> required. "mypoly.coeffs" ommits the zeros, therefore if the
> coefficients for x^0, x^1 and x^3 are 0, they would not appear. Is
> this a bug or a feature?

Feature...it only gives the coeffs of the terms that are present, not
those that aren't; that way, if you are working with x**100 - 1, you
don't get a lot of zeros. It is inconvenient to type these in by hand
for mpmath, however, so use sympy like this to do it for you:

>>> #get the dictionary of monomials and coefficients
>>> d=mymonic.as_dict()
>>> d
{(2,): 118.147448015123, (5,): 2.32000000000000, (6,): 1.00000000000000, (4,): -126.654400000000}

Now there must be a better way to do this, but not knowing it we can
construct the list of coefficients for the monomials from 6 down to 0
by asking the dictionary for them. But what do we do when a monomial is
not there? Use the dictionary's ability to return a default value:
below, mymonic.LM[0] is the exponent of the leading monomial (6,),
reversed(range(0, mymonic.LM[0]+1)) produces the exponents in 6-to-0
order, and d.get((j,), 0) returns the coefficient of r**j, or 0 if that
monomial is absent.

>>> cod = [d.get((j,), 0) for j in reversed(range(0, mymonic.LM[0]+1))]
>>> cod
[1.00000000000000, 2.32000000000000, -126.654400000000, 0, 118.147448015123, 0, 0]
>>>

> Third Variant (using solve) is completely messed now, solutions don't
> get even close to the real solutions.
>

feeding the "basic" form of the MONIC poly (an expression rather than
a Poly) into solve with quartics=True takes a long time but finally
gives:

>>> ans = solve(mymonic.as_basic(), r, simplified=False, quartics=True)
>>>

> I plotted the function in a spreadsheet, and found that the solutions
> provided by the Second Variant (with the manual fix) are the correct
> ones.
>
I plotted it, too, but without the monic form I just got a graph that
looked like it didn't cross the zero axis at all. This underscores the
importance of working with a monic form of the polynomial.

> Any ideas on how to fix all this to get the same results from all
> methods? how can I believe any of the results otherwise?
>

Every method is doing the best it can:
1) You gave it something that was so close to zero everywhere,
especially in the minima region around 0, that several candidates
looked like roots within the computational limit and within your
specified tolerance. Give it something that doesn't hug zero, i.e. the
monic form, and you will get results consistent with your expectation.

2) poly (I'm guessing) probably did some normalizing for you and found
the roots as you expected.

3) The current quartic routine, which I believe is needed to solve your
equation, is broken and awaiting an update; it's doing the best it can
:-) Using the version that is awaiting review, and patching it into the
current sympy (by others who are also doing the best they can, with a
hundred-plus changes to review and add into sympy), gives the following
results:

-12.4569900815341 - 0.0126318166896176*I
-0.719450735477597 + 0.239542505387883*I
0
0.72799186899517 - 0.246077177232919*I
10.1284489480165 + 0.0191664885346538*I

Those imaginary parts are too large for my liking and I'm looking into
why that is so, but I have a couple of issues that I am tracking down
right now that might be the solution.

An introductory numerical methods book would be a great help to you if
you are interested in numerical procedures. Burden and Faires
(Numerical Methods) is one I used, but a basic first-year Chemical
Engineering numerical methods book would be great, too:

At http://tinyurl.com/kt486x there are some amazon books:

I taught from Numerical Methods for Engineers by Steven C. Chapra and
Raymond P. Canale; this book had lots of sample code and discussions of
a wide variety of basic numerical procedures, and they teach about
pitfalls like the ones you are running into.

Burden and Faires is on the list, too.

An analysis of numerical methods is described at http://tinyurl.com/p53bad
and that might have some good discussions about why methods fail (and
I didn't make the url end in 'bad'!). If you understand what the
methods are trying to do and how they do it, you can help them do
their best, using your intelligence until the artificial intelligence
of the CAS catches up. But even that raises an issue: if you say you
want to solve an equation close to the computational limits of the
machine, should the CAS not allow you to do it? Maybe you really *do*
want to see how it fails. So that's why sometimes there are
higher-level function calls (like solve) and lower-level calls (like
to mpmath) to do the work for you.

smichr

Aug 15, 2009, 8:04:28 PM
to sympy
> > Second Variant (using Poly) kind of works, but there is a small fix
> > required. "mypoly.coeffs" ommits the zeros, therefore if the
> > coefficients for x^0, x^1 and x^3 are 0, they would not appear. Is
> > this a bug or a feature?
[cut]

> Now there must be a better way to do this [...to get the coefficients of a polynomial]

In interactive mode, if you do

>>> help(mypoly)

you will receive the documentation about the polynomials included in
sympy. If you browse through it, you will see there is a method called
.iter_all_coeffs(), and that is the one you want in order to get ALL
the coefficients, even the zeros:

>>> list(mymonic.iter_all_coeffs())
[1, 2.32000000000000, -126.654400000000, 0, 118.147448015123, 0, 0]

Right now, if you try to pass this directly to polyroots it will
complain because you are sending sympy numbers, but if you change them
all to floats first it works. Here it is step by step:

When you use the iter_all_coeffs() method, you get a generator...
>>> mymonic.iter_all_coeffs()
<generator object iter_all_coeffs at 0x01FF3968>

but polyroots needs a real list of numbers, not the thing that can
*make* the list, so...
>>> list(_)
[1, 2.32000000000000, -126.654400000000, 0, 118.147448015123, 0, 0]

Now, if you *copied* and pasted those numbers they would work, but if
you tried polyroots(list(mymonic.iter_all_coeffs())) it would fail,
because then you are sending the *sympy numbers* directly, and right
now polyroots is not ready to handle that. The solution is to turn them
into floating point numbers first:

>>> [float(tmp) for tmp in mymonic.iter_all_coeffs()]
[1.0, 2.3199999999999998, -126.6544, 0.0, 118.147448015123, 0.0, 0.0]

Putting it all together gives...
>>> mpmath.polyroots([float(tmp) for tmp in mymonic.iter_all_coeffs()])
[mpf('-12.439917313666234'), mpf('-0.96089172423224767'), mpf('-9.5259289164838849e-19'),
 mpf('5.8873478448024993e-19'), mpf('0.97837756504691176'), mpf('10.102431472851569')]
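
In later SymPy versions this dance got simpler: Poly.all_coeffs() returns every coefficient, zeros included. A sketch with a current SymPy and mpmath, using the quartic factor that remains after the double root at zero is pulled out (the r**2 factored off earlier in this thread):

```python
# Modern equivalent: Poly.all_coeffs() keeps the zero coefficients, and the
# resulting list feeds mpmath.polyroots after conversion to floats.
from sympy import Poly, Symbol
import mpmath

r = Symbol('r')
# the quartic left over after factoring r**2 out of the thread's sextic
p = Poly(r**4 + 2.32*r**3 - 126.6544*r**2 + 118.147448015123, r)

coeffs = [float(c) for c in p.all_coeffs()]
print(coeffs)   # five entries, including the zero coefficient of r**1

roots = mpmath.polyroots(coeffs)
print(roots)
```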

/c

New2Sympy

Aug 17, 2009, 4:43:33 AM
to sympy
Thanks! This puts me on the road again :)

That is a great tutorial... I looked at the all_coeffs() option before,
but couldn't get it to work, probably because of the data types you
explained.

I guess it's time that I take a look at the numerical methods courses
from my college years.

Thanks.

Regards, anartz

Vinzent Steinberg

Aug 18, 2009, 4:32:38 AM
to sympy
On Aug 17, 10:43 am, New2Sympy <anartz.li...@gmail.com> wrote:
> I looked at the all_coeffs() option
> before, but couldn't get it to work, probably because of the data types
> you explain.

It should be fixed in current sympy HEAD.

Vinzent