A laugh


kcrisman

unread,
Feb 11, 2009, 12:45:11 PM2/11/09
to sage-devel
There are of course several trac tickets related to this, so this is
not a bug report (for Sage or for Maxima), but I had to laugh when
this came up today in preparing for class - enjoy!

TypeError: Computation failed since Maxima requested additional
constraints (try the command 'assume(sin(t)^2+cos(t)^2>0)' before
integral or limit evaluation, for example):
Is sin(t)^2+cos(t)^2 positive or zero?

- kcrisman

mabshoff

unread,
Feb 11, 2009, 2:11:28 PM2/11/09
to sage-devel


On Feb 11, 9:45 am, kcrisman <kcris...@gmail.com> wrote:

Hi,

> There are of course several trac tickets related to this, so this is
> not a bug report (for Sage or for Maxima), but I had to laugh when
> this came up today in preparing for class - enjoy!

Well, it would be truly funny if we didn't use Maxima for symbolics,
but this is a sad, sad bug for a 30-year-old system.

> TypeError: Computation failed since Maxima requested additional
> constraints (try the command 'assume(sin(t)^2+cos(t)^2>0)' before
> integral or limit evaluation, for example):
> Is  sin(t)^2+cos(t)^2  positive or zero?
>
> - kcrisman

Cheers,

Michael

rjf

unread,
Feb 12, 2009, 1:23:02 AM2/12/09
to sage-devel
I don't understand this.
What command was sent to Maxima?
What bug are you referring to?
Perhaps that Maxima does not have an algorithm that you think it
should have?

Are you aware of the results of Daniel Richardson on the recursive
undecidability of (rather simple) identities? He proved that no
algorithm is possible in general.

Especially given that as background: how much work do you think Maxima
should do to try to determine, for arbitrary f, whether f(x)>0 or not?

Now, Maxima does know, in many different contexts, that sin^2+cos^2
can be simplified to 1. But looking for all such relationships that
it is aware of (and there are many), at every decision point, is
time-consuming.

Mathematica has a function which tries searching for smaller
equivalent expressions: Simplify, or maybe FullSimplify.

The difficulty is that the program tends to take too much time for any
but rather small expressions to start with. Such a program could
presumably be written in Sage, where each subexpression is repeatedly
submitted to 15 or 20 or 30 different "simplifier-like" programs to
see which equivalent expression is smaller. This is not a great idea.

But if you really really want to make sure that Maxima always knows
that sin^2+cos^2=1, you can consider
wrapping trigsimp() around every expression that you send to it.
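
For instance (just a sketch, assuming the usual Sage-to-Maxima string
interface):

sage: maxima('trigsimp(sin(t)^2 + cos(t)^2)')
1

An integrand routed through trigsimp first would never trigger the
"positive or zero?" question that started this thread.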



By the way, Macsyma is not 30 years old. The first paper describing
it dates back to 1967.
So it is 42 years old or more.

Richardson's results date to about 1968.

RJF

boo...@u.washington.edu

unread,
Feb 12, 2009, 2:35:23 AM2/12/09
to sage-devel
> Are you aware of the results of Daniel Richardson on the recursive
> undecidability of
> (rather simple) identities? He proved that in general there is no
> algorithm possible.

Yeah, and the halting problem is undecidable too, but you would still call the following program "stupid":

while 1: continue

There are a huge number of intractable problems. Determining whether sin^2 x + cos^2 x = 1 is not one of them.


> Especially given that as background,
>
> How much work do you think Maxima should do to try to determine for
> arbitrary f, if f(x)>0 or not?

Enough that it doesn't ask the user obviously stupid questions.

> Now Maxima does know, in many different contexts, that sin^2+cos^2 can
> be simplified to 1
> But looking for all such relationships that it is aware of (and there
> are many such relationships),
> at every decision point, is time consuming.

Yes, but I'd bet that Maxima could answer 99% of such questions faster than I could find a piece of paper. It's great that Maxima is "aware" of many relationships. It'd be awesome if it would effectively wield them.

> Mathematica has a function which tries searching for smaller
> equivalent expressions.
> Simplify, or maybe FullSimplify.
>
> The difficulty is that the program tends to take too much time for any
> but rather small expressions
> to start with.

Ah! So the problem is that Maxima is slow. I complain about that a lot, actually.

> Such a program could presumably be written in Sage,
> where each subexpression
> is repeatedly submitted to 15 or 20 or 30 different "simplifier-like"
> programs to see which
> equivalent expression is smaller. This is not a great idea.

No, sending 20-30 requests to Maxima is not a great idea. Fewer is better, usually.

> But if you really really want to make sure that Maxima always knows
> that sin^2+cos^2=1, you can consider
> wrapping trigsimp() around every expression that you send to it.
>
>
>
> By the way, Macsyma is not 30 years old. The first paper describing
> it dates back to 1967.
> So it is 42 years old or more.

Neat! In a few years, it'll be an antique! I used to own a 1969 Ford 150. Nothing like old iron -- but the damned thing burned a gallon of gasoline in under 6 miles on a straight, flat road in a tailwind! Out with the old, in with the new -- my '05 pickup gets 25 miles to the gallon in hilly city conditions.

> Richardson's results date to about 1968.

Pythagoras's result dates back to about 500BC. Have you heard of it?

Simon King

unread,
Feb 12, 2009, 3:27:09 AM2/12/09
to sage-devel
On Feb 12, 7:23 am, rjf <fate...@gmail.com> wrote:
> How much work do you think Maxima should do to try to determine for
> arbitrary f, if f(x)>0 or not?

Perhaps the same amount of work as for the successful solution of the
following two problems:
sage: bool((sin(x)^2+cos(x)^2).simplify_full()>0)
True
sage: bool(sin(x)^2+cos(x)^2==1)
True

Cheers,
Simon

kcrisman

unread,
Feb 12, 2009, 10:50:35 AM2/12/09
to sage-devel
Oh, I don't think this is as much of a bug as people think - rjf was
quite wise to ask what my command was!

sage: t=var('t')
sage: sqrt((-m*sin(m*t))^2+(n*cos(n*t))^2).nintegral(x,0,2*pi)

where m, n were determined in an interact. But I used the wrong
variable in nintegral! In addition,

sage: t=var('t')
sage: assume(sin(t)^2 + cos(t)^2 > 0)
sage: sqrt((sin(t))^2+(cos(t))^2).nintegral(x,0,2*pi)
Traceback (click to the left for traceback)
...
ValueError: Maxima (via quadpack) cannot compute the integral to that
precision

which seems okay, though no other natural "assume" command got me
there. Now, perhaps it should still be smarter than it is in the
following example:

sage: t=var('t')
sage: forget()
sage: assume(t==pi/2)
sage: sqrt((-2*sin(2*t))^2+(3*cos(3*t))^2).nintegral(x,0,2*pi)
Traceback (click to the left for traceback)
...
Is 9*cos(3*t)^2+4*sin(2*t)^2 positive or zero?

But at least it's asking the right question, since a numerical
integral is indeed possible here if t=pi/2 and so the integrand is
zero. And this works:

sage: t=var('t')
sage: forget()
sage: sqrt((-2*sin(2*t))^2+(3*cos(3*t))^2).nintegral(t,0,2*pi)[0]
15.209210627602969


Still, I suppose that it would seem natural to check for the most
common things of this kind like sin^2+cos^2. Even WeBWorK, a Perl
homework checker, checks for this sort of thing in its (non-CAS-based)
algorithm.

At the very least we know Sage has its work cut out for it if it ever
wants to remove dependence on the slow-slow interface to Maxima and
Lisp issues, because these are (in general) very thorny questions.
Even if they're amusing on occasion!

- kcrisman

mabshoff

unread,
Feb 12, 2009, 11:01:47 AM2/12/09
to sage-devel


On Feb 12, 7:50 am, kcrisman <kcris...@gmail.com> wrote:

<SNIP>
.
> At the very least we know Sage has its work cut out for it if it ever
> wants to remove dependence on the slow-slow interface to Maxima and
> Lisp issues, because these are (in general) very thorny questions.
> Even if they're amusing on occasion!

Yes, getting to the point where Maxima is today will be a formidable
amount of work.

The issue I have with Maxima is that back in the day Macsyma was
designed under the assumption that someone would always be sitting in
front of the terminal giving feedback, i.e. answering questions about
whether something is "positive, zero or negative". This is highly
annoying and a major weakness IMHO; e.g., the latest Maxima will ask
you about a when you run

limit(a*x,x,0);

Why? a's sign has nothing to do with the limit here. Maxima 5.16.3
will not ask you about the sign of a in the above limit. That is the
reason we did not upgrade to Maxima 5.17.1: it introduced more cases
where user interaction was required. If one wants to run Sage
computations involving Maxima, one does not want to answer questions.
On top of that, the assumption system in Maxima is woefully
under-documented, i.e. stripped of comments, and only in the last few
months has Dieter Kaiser started to fix long-standing bugs in it. I am
always surprised, given the age of the code base, how many issues are
left in Maxima and how often code is broken. E.g., in 5.13 or 5.14
some limit involving exp(-x) was broken.
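
As a stopgap one can at least head off that particular question by
declaring the sign up front -- a minimal sketch, assuming the usual
assume()/limit() behaviour on the Sage side:

sage: a, x = var('a x')
sage: assume(a > 0)   # declared in advance, so Maxima has nothing to ask
sage: limit(a*x, x=0)
0

But of course that only helps when you already know which question
Maxima is going to ask.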

> - kcrisman

Cheers,

Michael

rjf

unread,
Feb 12, 2009, 4:57:40 PM2/12/09
to sage-devel


On Feb 11, 11:35 pm, boot...@u.washington.edu wrote:
> > Are you aware of the results of Daniel Richardson on the recursive
> > undecidability of
> > (rather simple) identities?  He proved that in general there is no
> > algorithm possible.
>
> Yeah, and the halting problem is undecidable too, but you would still call the following program "stupid":
>
> while 1: continue

Actually, a program like that is very useful. Stupidity is usually
considered a characteristic of humans, so
as a first cut, I would say that no program is "stupid".

>
> There are a huge number of intractable problems.  Determining whether sin^2 x + cos^2 x = 1 is not one of them.

No, it is not intractable. But the tools you use to apply such
identities or simplifications are not usually efficient, since to be
effective they require a combinatorial search to see which, if any,
of the identities can usefully be applied.
To say that one should search for, in particular, sin^2+cos^2 is to
assume that your audience consists largely
of high school students taking trigonometry, or calculus students
doing simple problems for homework.
While this may be (statistically speaking) probable for Sage, it is a
bad design assumption for someone writing
a program that is supposed to do serious applied mathematics, as an
assistant to someone who presumably
has some knowledge of the tool, and for example knows of the existence
of programs like "trigsimp" or "ratexpand"
or "factor" ...

>
> > Especially given that as background,
>
> > How much work do you think Maxima should do to try to determine  for
> > arbitrary f,  if f(x)>0 or not?
>
> Enough that it doesn't ask the user obviously stupid questions.

OK, here's your homework. Write a program to determine if a question
is "obviously stupid".
When you are finished, ask for a PhD.

>
> > Now Maxima does know, in many different contexts, that sin^2+cos^2 can
> > be simplified to 1
> > But looking for all such relationships that it is aware of (and there
> > are many such relationships),
> > at every decision point, is time consuming.
>
> Yes, but I'd bet that Maxima could answer 99% of such questions faster than I could find a piece of paper.

I don't know you, but I keep a pad of paper handy. Also a pencil.


> It's great that Maxima is "aware" of many relationships.  It'd be awesome if it would effectively wield them.

Again, this could be made the core of a homework assignment, but could
lead to a PhD.

>
> > Mathematica has a function which tries searching for smaller
> > equivalent expressions.
> > Simplify,  or maybe FullSimplify.
>
> > The difficulty is that the program tends to take too much time for any
> > but rather small expressions
> > to start with.
>
> Ah!  So the problem is that Maxima is slow.

No, there is no such program in Maxima. Perhaps you meant to say
"the problem is that Mathematica is slow"?

Even so, your conclusion would be wrong. Read up on "combinatorial
explosion".

> I complain about that a lot, actually.

Well, maybe you are using the slow version of Maxima that is attached
to Sage, instead of one of the fast versions?

>
> > Such a program could presumably be written in Sage,
> > where each subexpression
> > is repeatedly submitted to 15 or 20 or 30 different "simplifier-like"
> > programs to see which
> > equivalent expression is smaller.  This is not a great idea.
>
> No, sending 20-30 requests to Maxima is not a great idea.  Fewer is better, usually.

Perhaps what you are saying is "the problem is that Sage is too slow"?

rjf

unread,
Feb 12, 2009, 5:08:25 PM2/12/09
to sage-devel
If there is an algorithm for simplify_full(), then presumably it could
be programmed in Lisp, and incorporated in Maxima.

You are invited to do so.

I assume that there are examples for which it doesn't do what you
want, and so you could argue that it should do more work.
I also assume there are examples for which it runs for too long and
does not provide any further simplification, and so you could argue
that it should do less work.

Here's a simplifier problem: simplify
1/16*(10*sin(x)-5*sin(3*x)+17*sin(5*x))

using commands in Sage.

The answer could have as little as 17 characters.

rjf

unread,
Feb 12, 2009, 5:25:31 PM2/12/09
to sage-devel


On Feb 12, 7:50 am, kcrisman <kcris...@gmail.com> wrote:
> Oh, I don't think this is as much of a bug as people think - rjf was
> quite wise to ask what my command was!
>
> sage: t=var('t')
> sage: sqrt((-m*sin(m*t))^2+(n*cos(n*t))^2).nintegral(x,0,2*pi)
>
> where m, n were determined in an interact.  But I used the wrong
> variable in nintegral!  In addition,

I see, you meant to integrate with respect to t, not x.

>
> sage: t=var('t')
> sage: assume(sin(t)^2 + cos(t)^2 > 0)
> sage: sqrt((sin(t))^2+(cos(t))^2).nintegral(x,0,2*pi)
> Traceback (click to the left for traceback)
> ...
> ValueError: Maxima (via quadpack) cannot compute the integral to that
> precision

This is bizarre. quadpack was written in Fortran. It was
automatically translated into Lisp and compiled, then loaded into
Maxima. It is a purely numerical program. Sage calls Maxima to run
this?

>
> which seems okay, though no other natural "assume" command got me
> there.

Since quadpack is a purely numerical program, any information that
Maxima might have about assumptions is not conveyed to quadpack. I do
not expect this will change.



>  Now, perhaps it should still be smarter than this following
> example:
>
> sage: t=var('t')
> sage: forget()
> sage: assume(t==pi/2)
> sage: sqrt((-2*sin(2*t))^2+(3*cos(3*t))^2).nintegral(x,0,2*pi)
> Traceback (click to the left for traceback)
> ...
> Is  9*cos(3*t)^2+4*sin(2*t)^2  positive or zero?
>
> But at least it's asking the right question, since a numerical
> integral is indeed possible here if t=pi/2 and so the integrand is
> zero.  And this works:
>
> sage: t=var('t')
> sage: forget()
> sage: sqrt((-2*sin(2*t))^2+(3*cos(3*t))^2).nintegral(t,0,2*pi)[0]
> 15.209210627602969

This looks awfully clumsy.
I tried this in maxima:

romberg(sqrt((-2*sin(2*t))^2+(3*cos(3*t))^2),t,0,2*%pi);

and I immediately got this:

15.20920996328813

romberg is the name of a simple numerical integration program that is
included in Maxima.

Maybe you would have fewer problems (and more informative error
messages) if you were just using Maxima.


>
> Still, I suppose that it would seem natural to check for the most
> common things of this kind like sin^2+cos^2.  Even WeBWorK, a Perl
> homework checker, checks for this sort of thing in its (non-CAS-based)
> algorithm.

There are all kinds of heuristics for doing homework. Unfortunately
some of them don't work for real calculations. Here's a heuristic: No
integer has more than 3 digits. It worked for me in a graduate course
in complex variables. If you compute an answer and there is an integer
with more than 3 digits, you made a mistake.


>
> At the very least we know Sage has its work cut out for it if it ever
> wants to remove dependence on the slow-slow interface to Maxima and
> Lisp issues, because these are (in general) very thorny questions.

The solution for you is to use Maxima, not Sage :)

> Even if they're amusing on occasion!

All in clean fun.
RJF

rjf

unread,
Feb 12, 2009, 5:37:00 PM2/12/09
to sage-devel


On Feb 12, 8:01 am, mabshoff <mabsh...@googlemail.com> wrote:
> On Feb 12, 7:50 am, kcrisman <kcris...@gmail.com> wrote:
>
> <SNIP>
> .
>
> > At the very least we know Sage has its work cut out for it if it ever
> > wants to remove dependence on the slow-slow interface to Maxima and
> > Lisp issues, because these are (in general) very thorny questions.
> > Even if they're amusing on occasion!
>
> Yes, getting to the point where Maxima is today will be a formidable
> amount of work.
>
> The issue I have with Maxima is that back in the day Macsyma was
> designed under the assumption that someone would always be sitting in
> front of the terminal and give feedback, i.e. is answer questions
> whether something is "positive, zero or negative".

Exploring alternatives to this is an active area of research, and has
been for at least 20 years.
There are several unsatisfactory alternatives.

> This is highly
> annoying and a major weakness IMHO, i.e. the latest maxima will ask
> you about a when you run
>
>   limit(a*x,x,0);
>
> Why? a's sign has nothing to do with the limit here. Maxima 5.16.3
> will not ask you about the sign of a in the above limit.

Presumably this is a bug, not a design issue.

> That is the
> reason why we did not upgrade to Maxima 5.17.1 because it introduced
> more cases where user interaction was required.

This behavior is presumably a bug. Rejecting all bug fixes in the
latest version of Maxima is, perhaps, an error.



> If one wants to run
> Sage computations involving Maxima one does not want to answer
> questions.

I think this is wrong. I think you do not want to answer questions
that you perceive to be unnecessary. And no one argues with that.

There are questions that you probably DO want to see. Like "I've run
out of memory page space. I can continue but will run at disk speed,
10,000 times slower. Do you want to restart with a larger memory
allocation?"

"If there is a possibility that <some expression> is zero, then some
<expensive extra computation>
should be done. Should I do so?"



> On top of that the assumption system in Maxima is woefully
> under documented, i.e. stripped of comments,

There is only one case in which an author removed comments (circa
1971) from his code, on the grounds that anyone looking at the code
with comments might be tempted to change it. That code had to do with
the assume database, which is now considered to be a source of
problems.

The lack of comments in other programs is just the way they were
written.


> and only in the last
> months or so Dieter Kaiser has started to fix long standing bugs in
> it. I am always surprised given the age of the code base how many
> issues are left in Maxima and how often code is broken. I.e. in 5.13
> or 14 some limit involving exp(-x) was broken.

It is unfortunate that new bugs are introduced. Open source is tough
that way. As for how many issues are left in Maxima --- well, there
are design issues that persist in every computer algebra system.
Mathematics is like that sometimes.

>
> > - kcrisman
>
> Cheers,
>
> Michael

Alex Ghitza

unread,
Feb 12, 2009, 6:04:30 PM2/12/09
to sage-...@googlegroups.com
On Fri, Feb 13, 2009 at 2:50 AM, kcrisman <kcri...@gmail.com> wrote:

> Still, I suppose that it would seem natural to check for the most
> common things of this kind like sin^2+cos^2.  Even WeBWorK, a Perl
> homework checker, checks for this sort of thing in its (non-CAS-based)
> algorithm.


For the record, WeBWorK does not actually understand symbolic expressions, simplifications, etc.  So how does it check that the student's messed-up unsimplified symbolic answer is "the same" as the hard-coded correct symbolic answer to the question?  It evaluates both expressions numerically at a bunch of points (possibly randomly chosen, but I don't remember).  If they always differ by at most a specified tolerance, they're deemed to be the same.  Otherwise, the student's answer must have been wrong.
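
Roughly, the check is something like the following (a hedged Python
sketch; the point count, range, and tolerance here are invented, and
WeBWorK's actual implementation surely differs):

import math, random

def numerically_equal(f, g, n_points=30, tol=1e-6, lo=-10.0, hi=10.0):
    # Compare two one-variable callables at randomly chosen points,
    # allowing a small tolerance; call them "the same" if they never
    # visibly disagree.
    for _ in range(n_points):
        x = random.uniform(lo, hi)
        if abs(f(x) - g(x)) > tol:
            return False   # disagreement found: mark the answer wrong
    return True

# e.g. a student's unsimplified answer vs. the hard-coded one:
print(numerically_equal(lambda x: math.sin(x)**2 + math.cos(x)**2,
                        lambda x: 1.0))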

This works surprisingly well for WeBWorK's purposes (and definitely much better than some sleep-deprived upper classman grading such problems manually for lousy pay).  But I don't know how we can turn this into a reasonably-correct way of dealing with symbolics.  

Best,
Alex


--
Alex Ghitza -- Lecturer in Mathematics -- The University of Melbourne -- Australia -- http://www.ms.unimelb.edu.au/~aghitza/

kcrisman

unread,
Feb 12, 2009, 7:53:45 PM2/12/09
to sage-devel
> For the record, WeBWorK does not actually understand symbolic expressions,
> simplifications, etc.  So how does it check that the student's messed-up
> unsimplified symbolic answer is "the same" as the hard-coded correct
> symbolic answer to the question?  It evaluates both expressions numerically
> at a bunch of points (possibly randomly chosen, but I don't remember).  If
> they always differ by at most a specified tolerance, they're deemed to be
> the same.  Otherwise, the student's answer must have been wrong.

@alex: Yes, and I didn't mean to suggest otherwise - I had honestly
forgotten this, in fact. Mike Gage gave a very nice explanation of
this in the Q&A after the JMM open-source-in-education session. My
point was simply that certain things might be checkable without a lot
of overhead - and despite rjf's comments, I would argue that
sin^2+cos^2 is so common, in and out of calculus assignments, that it
might be a simplification worth checking all the time.

@rjf: Maxima I'm sure is good; if I used it exclusively perhaps I
would have known about Romberg instead of nintegral, though I don't
like using commands in class when I can't immediately explain why
they are called what they are. I had a lot of good discussions with
colleagues from my consortium at the Joint Meetings about what they
used, and Derive, Maple, Mathematica, and Maxima all came up as ones
either currently used or used in the past. But what is wonderful
about the Sage project is being able to use things like Maxima along
with PARI (without actually having to use PARI), GAP, nice plotting,
and a programming language I can actually use as a not-very-
sophisticated programmer, so that I feel comfortable using Sage in
just about any course I teach - which, at a small liberal arts
college, is a pretty wacky variety at times. It's all under the hood;
the interface is the same for all of it, and I don't have to learn
(and more importantly, my students don't have to learn) a new syntax
etc. for every new class I teach (or am asked to teach).

Miscellanea:
1. I think that quadpack is used only because I used the
alternate .nintegral Maxima command instead of the (native Sage?)
numerical_integral command, which doesn't like extra variables very
much (see the sketch below).
2. I appreciate the distinction between newly introduced bugs (which
are endemic to every software package, yikes) and unimproved older
functionality/bugs. At least for the philosophically inclined, that
is an important distinction; a project with new bugs is still going,
and one without bugs is too simple (pace TeX).
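
Here is the sketch for point 1 -- hedged, since I'm just guessing
reasonable stand-in values for m and n, and using the GSL-based
numerical_integral with a plain Python function, so that the extra
parameters never reach Maxima at all:

sage: m, n = 2, 3   # hypothetical stand-ins for the interact values
sage: numerical_integral(lambda s: sqrt((-m*sin(m*s))^2 + (n*cos(n*s))^2), 0, 2*pi)

The value should agree with the ~15.21 that nintegral gave above.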

- kcrisman

Stan Schymanski

unread,
Feb 13, 2009, 4:09:10 AM2/13/09
to sage-...@googlegroups.com
Dear rjf,

rjf wrote:
> If there is an algorithm for simplify_full(), then presumably it could
> be programmed in Lisp, and incorporated in Maxima.
>
> You are invited to do so.
>
> I assume that there are examples for which it doesn't do what you
> want, and so you could argue that it should do more work.
>
Well, and there are examples where simplify_full() simply makes mistakes.

sage: var('a b c')
(a, b, c)
sage: ((a*b - 0.5*a*(b - c))/a).simplify_radical()
0
sage: ((b - 0.5*(b - c))).simplify_radical()
0.500000000000000*c + 0.500000000000000*b

Clearly, the two results should be the same, since 'a' cancels out.

See
http://groups.google.com/group/sage-support/browse_thread/thread/d5f945025165a099/aafb22cdac1b2a8a?lnk=gst&q=stan+maxima#aafb22cdac1b2a8a


Stan

Simon King

unread,
Feb 13, 2009, 5:30:32 AM2/13/09
to sage-devel
Hi all,

On Feb 13, 10:09 am, Stan Schymanski <schym...@gmail.com> wrote:
> rjf wrote:
> > If there is an algorithm for simplify_full(), then presumably it could
> > be programmed in Lisp, and incorporated in Maxima.
>
> > You are invited to do so.
>
> > I assume that there are examples for which it doesn't do what you
> > want, and so you could argue that it should do more work.

Sure. I am of course aware that there is no *algorithm* (in the sense
of always terminating in finite time, yielding the correct result) for
deciding whether f>0 for a general expression f, and also it is not
clear what a "simplification" should be.

So, one can't have more than a heuristic. But that's the point: the
better the heuristic, the happier the user --- provided the heuristic
works in a reasonable time. And any good heuristic should at least
catch the obvious cases.

Since Maxima certainly knows sin(x)^2+cos(x)^2 == 1 and various other
identities, it makes sense to try and apply them to f as
simplification rules. There are some rules, some may apply, some not
-- and if the rules are tested in a greedy way (always strictly reduce
the complexity), I guess this step is done in almost no time.
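
To make the greedy idea concrete, here is a rough sketch only; the
complexity measure and the particular simplifiers are my own arbitrary
choices, not anything Maxima actually does:

def greedy_simplify(expr, simplifiers, size=lambda e: len(str(e))):
    # Keep applying any simplifier that makes the expression strictly
    # smaller; stop when no rule shrinks it any further.
    changed = True
    while changed:
        changed = False
        for simp in simplifiers:
            candidate = simp(expr)
            if size(candidate) < size(expr):
                expr, changed = candidate, True
    return expr

# e.g., with Sage's trig simplifier as the only "rule":
# greedy_simplify(sin(x)^2 + cos(x)^2, [lambda e: e.simplify_trig()])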

Surely Maxima does try some rules before refusing to answer "f>0"?
So, why not add one more easy rule that frequently occurs in real
life? Or does Maxima give up right away when asked "f>0", without
trying anything?

Best regards
Simon

rjf

unread,
Feb 13, 2009, 10:35:24 AM2/13/09
to sage-devel


On Feb 13, 2:30 am, Simon King <k...@mathematik.uni-jena.de> wrote:
> Hi all,
>
> On Feb 13, 10:09 am, Stan Schymanski <schym...@gmail.com> wrote:
>
> > rjf wrote:
> > > If there is an algorithm for simplify_full(), then presumably it could
> > > be programmed in Lisp, and incorporated in Maxima.
>
> > > You are invited to do so.
>
> > > I assume that there are examples for which it doesn't do what you
> > > want, and so you could argue that it should do more work.
>
> Sure. I am of course aware that there is no *algorithm* (in the sense
> of always terminating in finite time, yielding the correct result) for
> deciding whether f>0 for a general expression f, and also it is not
> clear what a "simplification" should be.
>
> So, one can't have more than a heuristics. But that's the point: The
> better the heuristics, the happier the user --- provided the
> heuristics works in a reasonable time. And any good heuristics should
> at least catch the obvious cases.

"reasonable time" and "obvious cases" need to be defined.

>
> Since Maxima certainly knows sin(x)^2+cos(x)^2 == 1 and various other
> identities, it makes sense to try and apply them to f as
> simplification rules.

OK, here's an example.

Start with

(x-1)^150-(x-2)^150-2^150+1;

expand it and try to get back to the original. Is this case
"obvious"? How much time is "reasonable"? And what heuristics would
you apply?
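
Just to see the scale of the problem, in Sage (nothing clever here,
only the expansion itself):

sage: x = var('x')
sage: small = (x-1)^150 - (x-2)^150 - 2^150 + 1
sage: big = small.expand()   # a degree-149 polynomial with very large coefficients

Recovering the four-term form from big is the hard part.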


> There are some rules, some may apply, some not
> -- and if the rules are tested in a greedy way (always strictly reduce
> the complexity), I guess this step is done in almost no time.

Try your idea on the example above.

>
> Certainly Maxima does try some rules, before refusing to answer "f>0"?

Sure, though they are not necessarily implemented as "rules".
Primarily local simplification, such as special values, e.g. sin(pi),
and collection of terms of + and *.


> So, why not add one more easy rule that frequently occurs in real
> life? Or does Maxima give up right away when being asked "f>0",
> without to try anything?

No, but as I've suggested, you are mistaken in thinking that rules are
easy
to apply.


Since you and others are so fixated on sin^2+cos^2, consider this
example.

(sin(x)-1)^15-(cos(x)-2)^15-2^15+1;

expand it out and try finding sin^2+cos^2. Also find
(sin^2+cos^2)^2, etc., which appear up to the 7th power. You can do
this by long division.

What "rule" would you apply to get back to the small expression above?

RJF





>
> Best regards
>      Simon

William Stein

unread,
Feb 13, 2009, 10:47:34 AM2/13/09
to sage-...@googlegroups.com

Here are some things that the Ma*'s tend to do, which sometimes
academic math software projects don't:

* listening to what users want

* solving problems in practice that users care about, even if they are hard

In Sage development, we can and should continue to do the same.

- William

Tim Lahey

unread,
Feb 13, 2009, 11:02:19 AM2/13/09
to sage-...@googlegroups.com

On Feb 13, 2009, at 10:47 AM, William Stein wrote:

>>
>
> Here are some things that the Ma*'s tend to do, which sometimes
> academic math software projects don't:
>
> * listening to what users want
>

That's really not entirely true. A simple example is that I know many
people who would like better LaTeX export for Maple, but Maplesoft has
explicitly told me that they have no desire to fix it. Instead, they
want people to use their document mode, despite the fact that one
can't submit a Maple document to a journal. So, I wrote some code that
applies regular expressions to the LaTeX output to clean it up. Also,
they have different Vectors that are not interchangeable
(VectorCalculus and LinearAlgebra), and yet they haven't fixed that.

> * solving problems in practice that users care about, even if they
> are hard
>

Yes and no. They care about selling licenses, so if they think they
can add something that will help sell licenses, then they might
consider it. Lately, it seems like Maplesoft is more interested in
add-on products than in improving Maple.

> In Sage development, we can and should continue to do the same.

I certainly agree with this assessment. However, everyone has their
own specialities and interests, and it's difficult to get people to
work outside of them - for instance, on integration and limits, to
move away from Maxima.

Talking about Sage with people, I hear the opinion that Sage might
make a replacement for Magma, since that's the interest that many of
the Sage developers have, but that it's unlikely to be a replacement
for Maple/Mathematica, since symbolic calculus isn't of interest to
Sage developers. I'd certainly like for this to be proven wrong.

Cheers,

Tim.

---
Tim Lahey
PhD Candidate, Systems Design Engineering
University of Waterloo
http://www.linkedin.com/in/timlahey

William Stein

unread,
Feb 13, 2009, 11:09:59 AM2/13/09
to sage-...@googlegroups.com

In case nobody noticed, I'm not exactly a big fan of the Ma*'s. But
for whatever reason -- perhaps their purely selfish desire to make
money -- they do often try to listen to users and solve problems,
instead of making excuses like some posters in this thread.

>> In Sage development, we can and should continue to do the same.
>
> I certainly agree with this assessment. However, everyone has their
> own specialities
> and interests and it's difficult to get people to work outside that.
> For instance,
> working on integration and limits to move away from Maxima.
>
> Talking about Sage with people, I get back the opinions that Sage
> might make a replacement
> for Magma since that's the interest that many of the Sage developers
> have, but it's
> unlikely to be a replacement for Maple/Mathematica since the symbolic
> calculus isn't of
> interest to Sage developers. I'd certainly like for this to be proven
> wrong.

Let me just remind you that the goal of the Sage project is:

Create a viable open source free alternative to Magma, Maple,
Mathematica, and MATLAB.

I think nearly everyone who works on Sage is aware of and contributes
toward this goal.

You are a Sage developer and you are a counterexample to the statement
"symbolic calculus isn't of interest to Sage developers". There are, I
bet, dozens of other Sage developers I could add to that list,
including myself, Burcin Erocal, Jason Grout, Mike Hansen and many
others.

-- William

--
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org

Robert Dodier

unread,
Feb 13, 2009, 12:36:24 PM2/13/09
to sage-devel
On Feb 13, 9:09 am, William Stein <wst...@gmail.com> wrote:

> In case nobody noticed, I'm not exactly a big fan of the Ma*'s. But
> for whatever reason -- perhaps their purely selfish desire to make
> money -- they do often try to listen to users and solve problems,
> instead of making excuses like some posters in this thread.

For my part I am willing to consider modifying asksign
(the Maxima function which originates the "Is foo positive or ...?"
questions) to apply some simplifications in an effort to
avoid some questions. Maybe it's as simple as wedging a
call to trigsimp into asksign. I haven't looked at it and I'm
distracted by other things so it's unlikely that I'll do it.
But if someone wants to look into it and make a report
and/or proposal to the Maxima mailing list, then more power
to you.

A larger issue is that asksign is generally obnoxious.
I've worked on ways to construct a conditional expression
instead of requiring user input. I'll be happy to collaborate
with interested parties on a solution.

Believe it or not, Maxima is not stuck in a time warp;
progress is actually possible, when we stop carping and
focus on solving problems instead.

FWIW

Robert Dodier

William Stein

unread,
Feb 13, 2009, 12:53:38 PM2/13/09
to sage-...@googlegroups.com

+10 !

-- William

Robert Dodier

unread,
Feb 13, 2009, 1:04:33 PM2/13/09
to sage-devel
On Feb 13, 2:09 am, Stan Schymanski <schym...@gmail.com> wrote:

> sage: sage: var('a b c')
> (a, b, c)
> sage: ((a*b - 0.5*a*(b - c))/a).simplify_radical()
> 0

I guess Sage has keepfloat=true somewhere.
That seems to trigger a bug in Maxima.

(%i7) radcan ((a*b - 0.5*a*(b - c))/a), keepfloat=true;
(%o7) 0
(%i8) radcan ((a*b - 0.5*a*(b - c))/a), keepfloat=false;

`rat' replaced -0.5 by -1/2 = -0.5
(%o8) (c+b)/2


If you have time please submit a bug report.
http://sourceforge.net/tracker/?func=add&group_id=4933&atid=104933

Does Python have a built-in rational type?
If so, maybe that would obviate keepfloat.
Maxima would rather work with exact numbers, FWIW.

Robert Dodier

Jason Grout

unread,
Feb 13, 2009, 1:22:32 PM2/13/09
to sage-...@googlegroups.com
Robert Dodier wrote:
> On Feb 13, 2:09 am, Stan Schymanski <schym...@gmail.com> wrote:
>
>> sage: sage: var('a b c')
>> (a, b, c)
>> sage: ((a*b - 0.5*a*(b - c))/a).simplify_radical()
>> 0
>
> I guess Sage has keepfloat=true somewhere.
> That seems to trigger a bug in Maxima.
>
> (%i7) radcan ((a*b - 0.5*a*(b - c))/a), keepfloat=true;
> (%o7) 0
> (%i8) radcan ((a*b - 0.5*a*(b - c))/a), keepfloat=false;
>
> `rat' replaced -0.5 by -1/2 = -0.5
> (%o8) (c+b)/2
>

Yes, we have keepfloat on because of some disturbing simplifications
otherwise; see http://trac.sagemath.org/sage_trac/ticket/2400

>
> If you have time please submit a bug report.
> http://sourceforge.net/tracker/?func=add&group_id=4933&atid=104933
>
> Does python have a built-in rational type?

Of course, Sage has a rational type. Python 2.6 and 3.0 have a rational
type as well; see
http://docs.python.org/3.0/whatsnew/2.6.html#pep-3141-a-type-hierarchy-for-numbers
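
For what it's worth, a hedged sketch of the workaround: if the 0.5 is
rewritten as an exact rational before simplifying, the cancellation
should come out right (matching Robert's keepfloat=false run above):

sage: var('a b c')
(a, b, c)
sage: ((a*b - (1/2)*a*(b - c))/a).simplify_radical()   # expect (b + c)/2, not 0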


Jason

Stan Schymanski

unread,
Feb 16, 2009, 4:53:51 AM2/16/09
to sage-devel
Thanks a lot for checking up on this, Robert!
This is now bug #2604950 in Maxima.

Stan

On Feb 13, 7:04 pm, Robert Dodier <robert.dod...@gmail.com> wrote:
> On Feb 13, 2:09 am, Stan Schymanski <schym...@gmail.com> wrote:
>
> > sage: sage: var('a b c')
> > (a, b, c)
> > sage:  ((a*b - 0.5*a*(b - c))/a).simplify_radical()
> > 0
>
> I guess Sage has keepfloat=true somewhere.
> That seems to trigger a bug in Maxima.
>
> (%i7) radcan ((a*b - 0.5*a*(b - c))/a), keepfloat=true;
> (%o7) 0
> (%i8) radcan ((a*b - 0.5*a*(b - c))/a), keepfloat=false;
>
> `rat' replaced -0.5 by -1/2 = -0.5
> (%o8) (c+b)/2
>
> If you have time please submit a bug report.
> http://sourceforge.net/tracker/?func=add&group_id=4933&atid=104933