Understanding series expansion performance


Tobias Neumann

Mar 18, 2021, 4:12:03 PM
to FriCAS - computer algebra system
I am currently trying to see if I can speed up my code involving series
expansions. I really like that series are lazy in FriCAS, but I wonder
what impact multiple operations on the series (or the underlying
stream), like integration etc., have. I would assume that, apart from the
memory needed to "save" (i.e. delay) all the performed operations, there
should be no performance penalty compared to non-lazy evaluation.
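
To make concrete what I mean by stacking lazy operations, here is a minimal
sketch (untested, just to illustrate the idea; it only uses the standard
taylor/integrate/coefficient operations):

x := taylor('x)            -- UnivariateTaylorSeries(Expression(Integer), x, 0)
s := integrate(cos(x)^2)   -- each operation only wraps the underlying lazy stream
coefficient(s, 100)        -- the wrapped structure is evaluated on demand
coefficient(s, 100)        -- asking again is cheap, computed coefficients are cached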

Anyway, while testing I came up with this example, where some expression
is series expanded in two different ways. 

)clear completely
)set mes tim on

x := taylor('x);
s1 := x^2 + cos(x) + x^8/ sqrt(x-1);

s2 := taylor(y^2 + cos(y) + y^8/ sqrt(y-1),y=0);

approximate(s1,450);
approximate(s2,450);

approximate(s1,450);
approximate(s2,450);


When running the code (see below) the first method of evaluation is three 
times faster than the second. (A second evaluation runs in the same time 
for both cases, probably because the results are cached, but that's not 
relevant for me).

In my case I start with an Expression(Integer), so I (obviously) can only 
use the second version from the ExpressionToUnivariatePowerSeries package. 
But my idea is that maybe I can improve the performance if I understand 
why the first method is so much faster.

I would have expected that the ExpressionToUnivariatePowerSeries package
does something like "subst(expression, x=series('x))", symbolically 
speaking, but then it should be just as fast.
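
What I mean, roughly, is something like this (an untested sketch; I don't
know whether the package really works this way internally):

f y == y^2 + cos(y) + y^8 / sqrt(y - 1)   -- the expression as an interpreter function
sSubst := f(x)                            -- apply it to the series x from above
approximate(sSubst, 450);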

In principle I see two ways to do the series expansion: compute the
expansion for the whole expression at once (for example by taking
derivatives for a Taylor series), or do it for each function separately
and then combine the individual series, using the Cauchy product to
multiply them, adding them, etc. For the first example (s1) the second,
term-by-term approach is clearly what happens, but maybe not for
ExpressionToUnivariatePowerSeries, and this could be the difference?

Thanks,
Tobias

PS: It's possible that this message appears twice. I tried out the gmane.io mailing list <-> NNTP gateway, but posting didn't seem to work, so I sent it directly again.

---- running the code: 

approximate(s1,450);
Type: Expression(Integer)
                                 Time: 0 (IN) + 2.20 (EV) + 0 (OT) = 2.20 sec


approximate(s2,450);
Type: Expression(Integer)
                           Time: 0.00 (IN) + 6.12 (EV) + 0.00 (OT) = 6.13 sec

approximate(s1,450);
Type: Expression(Integer)
                                 Time: 0 (IN) + 1.36 (EV) + 0 (OT) = 1.36 sec

approximate(s2,450);
Type: Expression(Integer)
                                 Time: 0 (IN) + 1.37 (EV) + 0 (OT) = 1.37 sec

Ralf Hemmecke

Mar 18, 2021, 5:12:21 PM
to fricas-devel
> I would assume that up to memory requirements that "save" or delay
> all the performed operations, there should be no performance penalty
> compared to a non-lazy evaluation.

Well... I programmed something with Laurent series, and just by creating
my series more intelligently I could speed up the computation of the
coefficients. Understand that "lazy" actually means that with each
operation on a series you build up a data structure. This data structure
must be evaluated to get the next coefficient when you need it.

Unfortunately, one cannot say anything generic here, but for special
series this can make a difference.

> When running the code (see below) the first method of evaluation is
> three times faster than the second. (A second evaluation runs in the
> same time for both cases, probably because the results are cached,
> but that's not relevant for me).

Strange, on my computer it doesn't show this.

(4) -> approximate(s1,450);
                                                     Type: Expression(Integer)
                           Time: 0.01 (IN) + 5.99 (EV) + 0.01 (OT) = 6.01 sec
(5) -> approximate(s1,450);
                                                     Type: Expression(Integer)
                           Time: 0.00 (IN) + 4.84 (EV) + 0.00 (OT) = 4.84 sec
(6) -> approximate(s2,450);
                                                     Type: Expression(Integer)
                              Time: 0 (IN) + 6.51 (EV) + 0.00 (OT) = 6.51 sec
(7) -> approximate(s2,450);
                                                     Type: Expression(Integer)
                              Time: 0 (IN) + 1.99 (EV) + 0.00 (OT) = 1.99 sec


Note that I first use s1 twice and then s2 twice.

Anyway, I am pretty sure you do not want to call "approximate" at all.
Note that its result is no longer a series, but an element of the
coefficient domain (Expression(Integer)).
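
For example (an untested sketch, using the standard series operations):

x := taylor('x)
approximate(cos(x), 4)   -- the result is an Expression(Integer), not a series anymore
truncate(cos(x), 4)      -- still a series, so you can keep computing with it lazily
coefficient(cos(x), 4)   -- just one coefficient of the series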

> In my case I start with an Expression(Integer), so I (obviously) can
> only use the second version from the
> ExpressionToUnivariatePowerSeries package. But my idea is that maybe
> I can improve the performance if I understand why the first method is
> so much faster.

I did not really get what you are up to, but if I were you, I would try
to construct my series and then work in a series domain instead of going
back and forth between Expression(Integer) and the series domain.

Try this...

E ==> AlgebraicNumber
x := monomial(1,1)$UnivariatePuiseuxSeries(E, 'x, 0)
s1 := x^2 + cos(x) + x^8/ sqrt(x-1);
l1 := [s1.i for i in 0..450];

Evaluate the last line a second time. Then you see what caching does.

And as a hint... look here...

https://github.com/fricas/fricas/blob/master/src/algebra/puiseux.spad#L366

There are a certain number of functions that are already known to the
FriCAS series domains.
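
For example (an untested sketch, continuing with the x defined above, and
assuming the coefficient domain supports these functions):

u := exp(x) * log(1 + x)   -- exp and log are computed directly inside the series domain
coefficient(u, 10)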

Ralf

Waldek Hebisch

Mar 18, 2021, 5:17:29 PM
to fricas...@googlegroups.com
Well, we have:

(1) -> y^2 + cos(y) + y^8/ sqrt(y-1)

                    2  +-----+    8
         (cos(y) + y )\|y - 1  + y
   (1)   --------------------------
                   +-----+
                  \|y - 1
                                                     Type: Expression(Integer)


Replacing y in this by the series for x, I get a time comparable to
expanding the expression.
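
For example, something along these lines (just a sketch):

x := taylor('x)
-- the transformed form, with the series substituted for y:
sSub := ((cos(x) + x^2) * sqrt(x - 1) + x^8) / sqrt(x - 1)
approximate(sSub, 450);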

> In principle I see two ways to do the series expansions: Compute the
> expansion as a whole for the whole expression (for example by taking
> derivatives for a Taylor series). Or do it for each function separately,
> and use the Cauchy product to multiply the individual series together, or
> add them, etc. For the first series expansion example this is clearly
> done,

No, FriCAS uses derivatives only as a fallback, when it knows no
better method. For elementary functions, "standard" special
functions and a few other things FriCAS has much faster methods.

> but maybe not for ExpressionToUnivariatePowerSeries and this could
> be the difference?

I think it is the order of operations that matters. s1 uses a good
formula; the formula used to compute s2 is subject to automatic
transformations, and this makes it less efficient.

--
Waldek Hebisch

Tobias Neumann

Mar 18, 2021, 5:40:17 PM
to FriCAS - computer algebra system
@Ralf: I see. I can't seem to reproduce the *big* difference anymore. When I
posted, I had tried it with the s1 and s2 evaluations exchanged, to exclude
CPU scheduling issues (boost kicking in after the first one) and other
problems, so who knows what happened. In practice I don't use approximate,
but handle the underlying streams myself, because I want to dynamically
evaluate up to a certain point (error). This was just an example to play
around with, and I found it strange that there is such a big difference.
But Waldek figured out why there is *some* difference:

@Waldek: Thanks, that is another "d'oh" moment. The second way clearly starts with a more complicated expression.

I've noticed before that if I do a ratDenom() on my expression to eliminate
square roots in the denominator, the series expansions are *significantly*
faster. Given that improvement, I was playing around to see if there are
other things that could be tuned. See the dramatic improvement below:

s3 := taylor(ratDenom(y^2 + cos(y) + y^8/ sqrt(y-1)),y=0);

approximate(s1,450);
                           Time: 0.01 (IN) + 5.72 (EV) + 0.01 (OT) = 5.74 sec

approximate(s2,450);
                                 Time: 0 (IN) + 6.04 (EV) + 0 (OT) = 6.04 sec

approximate(s3,450);
                                 Time: 0 (IN) + 0.87 (EV) + 0 (OT) = 0.87 sec
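
If this holds up, one could wrap it into a small helper (untested sketch;
expandFast is just a made-up name):

expandFast(e, eq) == taylor(ratDenom e, eq)
s3a := expandFast(y^2 + cos(y) + y^8/ sqrt(y-1), y = 0);
approximate(s3a, 450);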
