New implementation of fraction fields


Michel

May 27, 2007, 1:03:04 PM
to sage-devel
Hi,

The current implementation of fraction fields is slow. This is very
apparent for rings with a slow gcd algorithm. But even if this is not
the case, constantly taking gcd's eventually becomes a major
bottleneck. See the example below, which uses the new libSingular.

It was pointed out on this list that not taking gcd's at every step
quickly leads to combinatorial explosion when evaluating polynomials
with a large number of terms.

I think I have found a solution for this last problem. The trick is to
cache partial factorizations of denominators. For example, if we
perform an addition a/b + c/d = (ad + cb)/(bd), then we remember that
the denominator bd is really b*d. If next time we have to add
(ad + cb)/(bd) + (c/d)^2, we can take bd^2 as the new denominator
instead of bd^3.
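To make the bookkeeping concrete, here is a toy sketch in plain Python
(my illustration, not code from the patch): a denominator is a multiset
of cached factors, and a common denominator of two fractions is the
factor-wise maximum of the exponents rather than the full product.

```python
from collections import Counter

# Toy model of the cache: a denominator is a multiset of factors,
# e.g. Counter({'b': 1, 'd': 2}) stands for b*d^2.

def common_denominator(fb, fd):
    """Least common denominator of two factored denominators."""
    return fb | fd  # Counter union = factor-wise maximum of exponents

# a/b + c/d: the sum's cached denominator is {b: 1, d: 1}, i.e. b*d.
bd = common_denominator(Counter({'b': 1}), Counter({'d': 1}))
assert bd == Counter({'b': 1, 'd': 1})

# Adding (c/d)^2 (denominator d^2) next gives b*d^2 as the new
# denominator, instead of the naive product (b*d)*d^2 = b*d^3.
assert common_denominator(bd, Counter({'d': 2})) == Counter({'b': 1, 'd': 2})
```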

I have created a new FractionField_cache implementation which uses
this idea. See
http://emis.uhasselt.be/sage_patches/fraction_field_cache.
The specific patch is here:
http://emis.uhasselt.be/sage_patches/fraction_field_cache/factor_cacheing.patch
It depends on the one_element patch by William. It creates three new
files

factor_cache.pyx
fraction_field_cache.py
fraction_field_element_cache.py

and modifies setup.py (to build the extension factor_cache). To use
the new implementation do

from sage.rings.fraction_field_cache import FractionField_cache

Example (using the new libSingular)

sage: from sage.rings.fraction_field_cache import FractionField_cache
sage: R.<x,y,z>=QQ[]
sage: Q=FractionField_cache(R,auto_reduce=False)
sage: def t():((x+y+z)**20)(Q(1,x+y),y,y**100).reduce()
....
sage: time t()
CPU times: user 2.11 s, sys: 0.02 s, total: 2.13 s
Wall time: 2.27 <== compare this

sage: from sage.rings.fraction_field import FractionField_generic
sage: Q=FractionField_generic(R)
sage: def t():((x+y+z)**20)(Q(1,x+y),y,y**100)
...
sage: time t()
CPU times: user 12.49 s, sys: 0.06 s, total: 12.54 s
Wall time: 14.36 <== to this


For more benchmarks see
http://emis.uhasselt.be/sage_patches/fraction_field_cache/summary.log
http://emis.uhasselt.be/sage_patches/fraction_field_cache/benchmark.log

Yet more benchmarks can be created with the following program
http://emis.uhasselt.be/sage_patches/fraction_field_cache/benchmark.py

Notes:
(1) When testing, please be careful to use the constructor
Q(num,denom) to create fractions. A FractionField_cache constructed
using __init__ is of course not globally unique, so fractions of the
form num/denom will live in a FractionField_generic and not in Q.
(2) I did not look at coercions, although if there are problems they
should be trivial to fix.
(3) The actual cacheing of partial factorizations of denominators
is done in the extension "factor_cache". Please read the docstring
there for the implementation.
(4) I hope to write an implementation of the subresultant algorithm
for poly_dicts (if someone else does not do it). Then every polynomial
ring will have a fallback gcd algorithm (if the base ring has one).
Although I think I can optimize it quite a bit, this algorithm will
of course be slower than more specialized ones. For this I would
like the FractionField implementation to be more optimized.

Question: What should I do now? I think the implementation of
FractionField_cache improves upon the implementation of
FractionField_generic. Please give some feedback.

Michel

PS. It has been discussed to perform the final reduce only when, for
example, a fraction is printed. I have not done this yet since it
makes benchmarking somewhat more tricky. But I can do this if people
think it is a better solution.

David Harvey

May 27, 2007, 1:28:25 PM
to sage-...@googlegroups.com
What happens if I add a/(bc) to d/(be)? Does the implementation attempt
to recognise that there is a common factor in the denominators? (i.e.
assuming the system does not already know about the factor of b.) Or do
I just end up with the numerator a(be) + d(bc) and the "factorised"
denominator (bc)(be)? Is this still true even if b is quite a large
object compared to a, c, d, e?

And I assume if you have a fraction a/b, where the system has
remembered some factorisation information about b, then if you compute
1/(a/b) it throws that information away?

david

Michel

May 27, 2007, 2:21:27 PM
to sage-devel
On May 27, 7:28 pm, David Harvey <dmhar...@math.harvard.edu> wrote:
> What happens if I add a/(bc) to d/(be)? Does the implementation attempt
> to recognise that there is a common factor in the denominators? (i.e.
> assuming the system does not already know about the factor of b.)

The implementation does not attempt to recognize factors on its own.
I think that would be much too expensive. But typically factored
numbers come from prior operations, and in that case of course they
are recognized. The problem is not to speed up a single addition of
fractions but the case where you evaluate, for example, a polynomial
(or rational function) with a large number of terms on fractions.

>
> And I assume if you have a fraction a/b, where the system has
> remembered some factorisation information about b, then if you compute
> 1/(a/b) it throws that information away?
>

No, it does not throw that information away. Internally, elements of
the ring are stored in "factored" form r_1^{e_1}...r_n^{e_n} (with a
lazily evaluated cached product). This information is preserved under
"multiplicative" operations (multiplication, division, (mock) gcd,
(mock) lcm). It is destroyed under additive operations.
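As a rough sketch of this representation (names invented by me; the
real implementation lives in factor_cache.pyx), multiplication just
merges exponent vectors, while addition must expand the cached
products and collapses the result into a single new factor:

```python
from collections import Counter

# Minimal sketch (invented names, not Michel's actual factor_cache.pyx):
# an element is a lazily evaluated product r_1^e_1 * ... * r_n^e_n.
class FactoredElement:
    def __init__(self, factors):
        self.factors = Counter(factors)  # factor -> exponent
        self._value = None               # cached expanded product

    def value(self):
        if self._value is None:
            v = 1
            for r, e in self.factors.items():
                v *= r ** e
            self._value = v
        return self._value

    def __mul__(self, other):
        # multiplicative: factorization information is preserved
        return FactoredElement(self.factors + other.factors)

    def __add__(self, other):
        # additive: the sum becomes a single new factor;
        # the factored structure is destroyed, as described above
        return FactoredElement({self.value() + other.value(): 1})

a = FactoredElement({2: 1, 3: 2})       # 2 * 3^2 = 18
b = FactoredElement({3: 1})             # 3
assert (a * b).factors == Counter({2: 1, 3: 3})  # 2 * 3^3, still factored
assert (a + b).factors == Counter({21: 1})       # collapsed to 21
```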

Michel

Michel

May 27, 2007, 2:26:43 PM
to sage-devel


> case where you evaluate for example a polynomial (or rational
> function) with a large number of terms
> on fractions.

And actually if you have a slow gcd algorithm you
don't need a large number of terms before things become
really very slow.

Michel

David Harvey

May 27, 2007, 2:47:10 PM
to sage-...@googlegroups.com

On May 27, 2007, at 2:21 PM, Michel wrote:

>> And I assume if you have a fraction a/b, where the system has
>> remembered some factorisation information about b, then if you compute
>> 1/(a/b) it throws that information away?
>>
> No it does not throw that information away. Internally elements of the
> ring
> are stored in "factored" form r_1^{e_1}...r_n^{e_n} (with a lazily
> evaluated
> cached product). This
> information is preserved under "multiplicative" operations.
> (multiplication, division, (mock)gcd, (mock)lcm). It is destroyed
> under additive operations.

Ah I see. That is quite interesting.

Well I have another idea, I'm not sure to what extent this is already
implemented in your factor_cache.pyx, or to what extent it even makes
sense. (I haven't actually looked at your code yet.)

It sounds like this factor caching thing actually has nothing to do
with fraction fields as such. Rather it sounds like one could implement
a wrapper around a generic ring which remembers factorisations of
elements. I'm imagining something like

R = some ring
S = FactorCache(R)

So now S is a ring, whose elements are products of elements of R.
Multiplication does the obvious thing, addition and subtraction I
suppose checks for common factors and just expands the rest? (I suppose
this is very much like the Factorization class already in SAGE, except
it would support arithmetic. Or maybe the Factorization class already
supports arithmetic, I don't know.)

Then you would just write

K = FractionField(S)

Would it even be necessary to write special code in the fraction field
class if it was done this way? (Hmmm.... thinking aloud in public
here....)
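A possible sketch of that addition rule (hypothetical code of mine,
with integers standing in for ring elements): factors common to both
operands stay factored, and only the remainders are expanded and
summed.

```python
from collections import Counter

# Hypothetical sketch of the wrapper described above: addition keeps
# the factors common to both operands and only expands the rest.
def add_factored(x, y):
    """x, y: Counter mapping factor -> exponent. Returns the factored sum."""
    common = x & y                      # factor-wise minimum = shared part
    def expand(f):
        v = 1
        for r, e in f.items():
            v *= r ** e
        return v
    rest = expand(x - common) + expand(y - common)
    return common + Counter({rest: 1})  # the common part stays factored

# (2 * 3^2) + (2 * 5) = 2 * (9 + 5) = 2 * 14: the shared factor 2 survives.
s = add_factored(Counter({2: 1, 3: 2}), Counter({2: 1, 5: 1}))
assert s == Counter({2: 1, 14: 1})
```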

David

Nick Alexander

May 27, 2007, 3:05:33 PM
to sage-...@googlegroups.com
David Harvey <dmha...@math.harvard.edu> writes:

> On May 27, 2007, at 2:21 PM, Michel wrote:
>
>>> And I assume if you have a fraction a/b, where the system has
>>> remembered some factorisation information about b, then if you compute
>>> 1/(a/b) it throws that information away?
>>>
>> No it does not throw that information away. Internally elements of the
>> ring
>> are stored in "factored" form r_1^{e_1}...r_n^{e_n} (with a lazily
>> evaluated
>> cached product). This
>> information is preserved under "multiplicative" operations.
>> (multiplication, division, (mock)gcd, (mock)lcm). It is destroyed
>> under additive operations.
>
> Ah I see. That is quite interesting.
>
> Well I have another idea, I'm not sure to what extent this is already
> implemented in your factor_cache.pyx, or to what extent it even makes
> sense. (I haven't actually looked at your code yet.)
>
> It sounds like this factor caching thing actually has nothing to do
> with fraction fields as such. Rather it sounds like one could implement
> a wrapper around a generic ring which remembers factorisations of
> elements. I'm imagining something like
>
> R = some ring
> S = FactorCache(R)

I vote +1 to this idea, because it is more modular. Does this idea
only make sense over division rings? Because it is sometimes nice to
do integer arithmetic and have everything factored, as well.

I will referee such a patch, if needed.

Nick

Michel

May 27, 2007, 3:09:07 PM
to sage-devel

> R = some ring
> S = FactorCache(R)
>
> So now S is a ring, whose elements are products of elements of R.
> Multiplication does the obvious thing, addition and subtraction I
> suppose checks for common factors and just expands the rest? (I suppose
> this is very much like the Factorization class already in SAGE, except
> it would support arithmetic. Or maybe the Factorization class already
> supports arithmetic, I don't know.)
>
You are indeed correct that even under addition and subtraction it
would be possible to preserve common factors. This is an elegant
idea which would be easy to implement. I don't know if it would
yield any speedups for fraction fields.

I looked at the Factorization class but it was not optimized enough.
One thing I realized early on is that factor cacheing needs to be
*very* fast or it becomes a bottleneck in itself. For example I use
"is" as the test for equality and not ==.
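One way such an identity test can be made safe is by interning factors
on creation, along these lines (my assumption about the approach, not
a quote of factor_cache.pyx):

```python
# Sketch (an assumption, not the actual factor_cache.pyx): if every
# factor object is interned in a pool on creation, equal factors are
# the same object, so the fast identity test `is` can replace a
# potentially expensive `==` (e.g. polynomial comparison).
_factor_pool = {}

def intern_factor(key, make):
    """Return the unique cached object for `key`, creating it once."""
    obj = _factor_pool.get(key)
    if obj is None:
        obj = _factor_pool[key] = make()
    return obj

p = intern_factor("x+y", lambda: ["x+y"])  # stand-in for a polynomial
q = intern_factor("x+y", lambda: ["x+y"])
assert p is q  # identity comparison is now sufficient
```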


> Then you would just write
>
> K = FractionField(S)
>
> Would it even be necessary to write special code in the fraction field
> class if it was done this way? (Hmmm.... thinking aloud in public
> here....)

Well, currently the module fraction_field_cache.py is almost the
same as fraction_field.py. So indeed very few changes are necessary.
One can possibly bring it down to zero, but there are a few technical
points, like the meaning of is_one(). In factor_cache.pyx, is_one()
just means zero factors (anything else is too expensive). But for
example to print a fraction you need the real is_one(). So I use
_value(denominator).is_one() (where _value(object) is the cached
product).

So your idea is sensible, but I don't know if there are really any
other applications beyond fraction fields.

Michel

Michel

May 28, 2007, 4:36:56 AM
to sage-devel
> I vote +1 to this idea, because it is more modular. Does this idea
> only make sense over division rings? Because it is sometimes nice to
> do integer arithmetic and have everything factored, as well.
>
> I will referee such a patch, if needed.

Hi,

I would very much prefer the patch to be reviewed as is. Substitution
of fraction field elements into rational functions is important for
algebraic geometry (that's how I found this problem) and this patch
speeds it up exponentially. So the patch does not attack an abstract
problem but a *real* problem. I do not deny that things can probably
be optimized further (e.g. by rewriting
fraction_field_element_cache.py in pyrex, removing the python<->pyrex
overhead), but it would be useful to have some indication that I am
not wasting my time doing this. Note that for now my patch does not
touch any other parts of sage, so it does not require any big design
decisions.

>Does this idea
>only make sense over division rings? Because it is sometimes nice to
>do integer arithmetic and have everything factored, as well.

The factor cache works of course for non-fields. There is not much
point taking the fraction field of a field, is there :-) Naturally
you can use it for integers, but it would probably be very slow since
integer arithmetic is already extremely fast. It is really meant for
rings for which no specialized arithmetic is available.

Michel

David Harvey

May 28, 2007, 9:23:49 AM
to sage-...@googlegroups.com

On May 28, 2007, at 4:36 AM, Michel wrote:

>> Does this idea
>> only make sense over division rings? Because it is sometimes nice to
>> do integer arithmetic and have everything factored, as well.
>
> The factor cache works of course for non-fields. There is not much
> point
> taking the fraction field of a field, is there:-) Naturally you can
> use it for integers but it would
> probably be very slow since integer arithmetic is already extremely
> fast.
> It is really meant for rings for which no specialized arithmetic is
> available.

Actually the gcd code in the most recent release of GMP is O(N^2).
(This is supposed to be fixed in the upcoming GMP 5, but it may not
be for several years).

So for example:

sage: a = ZZ.random_element(2**1000000)
sage: b = ZZ.random_element(2**1000000)
sage: c = ZZ.random_element(2**1000000)

sage: time x = a*b
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02

sage: time x = 1/(a*b) + 1/(a*c)
CPU times: user 9.58 s, sys: 0.01 s, total: 9.59 s
Wall time: 9.59

Presumably if this was done in the factor cache fraction field it
would be much faster.

I think the general case of a factor cache for rings could be useful
in some situations, but I admit I'm at a loss to name any specific
examples.

I think I would generally support including the code you are
proposing, assuming a referee is happy with the code. (I insist
however on the spelling "caching" rather than "cacheing" :-))

Qualification: obviously, if this ever gets integrated with the
FractionField constructor, it should be only *optional*... I would
expect there would be many applications where the caching would
actually slow things down. What do you think about this?

David

Nick Alexander

May 28, 2007, 1:43:09 PM
to sage-...@googlegroups.com
Michel <michel.va...@uhasselt.be> writes:

> Question: What should I do now? I think the implementation of
> FractionField_cache
> improves upon the implementation of FractionField_generic. Please
> give some feedback.

Hi Michel,

I will referee this patch. Since it is a little involved, I may take
a few days, but I'll try to get back to you soon.

Thanks for such a useful contribution,
Nick

Michel

May 28, 2007, 2:02:26 PM
to sage-devel

>
> I think I would generally support including the code you are
> proposing, assuming a referee is happy with the code. (I insist
> however on the spelling "caching" rather than "cacheing" :-))
>
No problem. For me cache is a French word, hence cacheing. But Google
seems to favor caching by far....

> Qualification: obviously, if this ever gets integrated with the
> FractionField constructor, it should be only *optional*... I would
> expect there would be many applications where the caching would
> actually slow things down. What do you think about this?
>

Well, if you look at the benchmarks you see that if libSingular is
used, then for very small examples caching is indeed a bit slower
than the generic implementation. I will have to investigate the
cause of this.

This being said, I actually want FractionField_cache to be optional
so that I can improve it without disrupting anything else. On my sage
installation I have given the FractionField method an extra optional
parameter 'cache' which, when true, returns an instance of
FractionField_cache instead of FractionField_generic.

Michel

David Harvey

May 28, 2007, 2:07:59 PM
to sage-...@googlegroups.com

On May 28, 2007, at 2:02 PM, Michel wrote:

> This being said I actually want FractionField_cache to be optional so
> that I can improve
> it without disrupting anything else. Om my sage installation I have
> given the FractionField method an extra optional parameter 'cache='
> which
> when true returns an instance of FractionField_cache instead of
> FractionField_generic.

I think having a flag like that sounds very reasonable.

david

Justin C. Walker

May 28, 2007, 2:09:48 PM
to sage-...@googlegroups.com

On May 27, 2007, at 11:47 , David Harvey wrote:
> On May 27, 2007, at 2:21 PM, Michel wrote:
[snip]

> Well I have another idea, I'm not sure to what extent this is already
> implemented in your factor_cache.pyx, or to what extent it even makes
> sense. (I haven't actually looked at your code yet.)
>
> It sounds like this factor caching thing actually has nothing to do
> with fraction fields as such. Rather it sounds like one could
> implement
> a wrapper around a generic ring which remembers factorisations of
> elements. I'm imagining something like
>
> R = some ring
> S = FactorCache(R)
>
> So now S is a ring, whose elements are products of elements of R.
> Multiplication does the obvious thing, addition and subtraction I
> suppose checks for common factors and just expands the rest? (I
> suppose
> this is very much like the Factorization class already in SAGE, except
> it would support arithmetic. Or maybe the Factorization class already
> supports arithmetic, I don't know.)
>
> Then you would just write
>
> K = FractionField(S)
>
> Would it even be necessary to write special code in the fraction field
> class if it was done this way? (Hmmm.... thinking aloud in public
> here....)

An intriguing idea... A couple of questions:

Should this be an implementation concept, rather than an "API
concept"? Not quite sure what I am asking, but this feels to me like
a good idea for the implementation, but not for "the user" to fiddle
with. What is the difference between R and S for your average user?
Would FractionField(R) and FractionField(S) be the same?

How does this relate to rings like Z[\sqrt{-5}]? Keeping
factorizations around for these seems problematic...

Justin

--
Justin C. Walker, Curmudgeon-At-Large
Institute for the Absorption of Federal Funds
--------
If you're not confused,
You're not paying attention
--------

Justin C. Walker

May 28, 2007, 2:14:39 PM
to sage-...@googlegroups.com

On May 28, 2007, at 06:23 , David Harvey wrote:
> On May 28, 2007, at 4:36 AM, Michel wrote:
[snip]

> I think I would generally support including the code you are
> proposing, assuming a referee is happy with the code. (I insist
> however on the spelling "caching" rather than "cacheing" :-))

Ewww...'caching' just looks...wrong.

Justin

--
Justin C. Walker, Curmudgeon at Large


Institute for the Absorption of Federal Funds

-----------
My wife 'n kids 'n dogs are gone,
I can't get Jesus on the phone,
But Ol' Milwaukee's Best is my best friend.
-----------


Robert Bradshaw

May 28, 2007, 6:47:13 PM
to sage-...@googlegroups.com
I'm kind of late jumping into this thread, but I have some concerns.
Unless the factorization is known completely (or gcd's are taken on
the unknown part), I still think this can lead to combinatorial
explosion fairly quickly. For example,

sage: from sage.rings.fraction_field_cache import FractionField_cache

sage: from sage.rings.fraction_field import FractionField_generic

sage: R.<x,y,z>=QQ[]
sage: T = [ZZ.random_element()*x + ZZ.random_element()*y + ZZ.random_element()*z for _ in range(10)];
sage: S = [prod(random_sublist(T, .2)) for _ in range(50)]

sage: Q=FractionField_cache(R,auto_reduce=False)
sage: time f = sum([~Q(s) for s in S])
CPU times: user 6.41 s, sys: 0.01 s, total: 6.42 s
Wall time: 6.45
sage: len(str(f))
2011614
sage: time f.reduce()
^C ### Got really tired of waiting after 10 min....
sage: time f(2,3,4)
CPU times: user 9.68 s, sys: 0.03 s, total: 9.71 s
Wall time: 10.08
244271/26730

sage: Q=FractionField_generic(R)
sage: time f = sum([~Q(s) for s in S])
CPU times: user 0.15 s, sys: 0.00 s, total: 0.15 s
Wall time: 0.15
sage: len(str(f))
5430
sage: time f.reduce() # it's already reduced, but it doesn't know that
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00
sage: time f(2,3,4)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00
244271/26730


That being said, I think it could maybe have its use, as could a
general factored element ring (though I too can't think of any uses
off the top of my head).

- Robert

Nick Alexander

May 28, 2007, 7:35:44 PM
to sage-...@googlegroups.com
Robert Bradshaw <robe...@math.washington.edu> writes:

> I'm kind of late jumping into this thread, but I have some concerns.
> Unless the factorization is known completely (or gcd's are taken on
> the unknown part), I still think this can lead to combinatorial
> explosion fairly quickly. For example,

Michel and I are working through the refereeing process. I will try
to take your examples into account, and have them as doctests. Please
ping me again if these troubles remain after the patch is applied.

Nick

Robert Bradshaw

May 28, 2007, 8:04:37 PM
to sage-...@googlegroups.com
I don't think you want them as doctests; they take dozens of minutes
to run! (That is, for the cached rings, which take 10000 times longer
to compute.) Ideally this issue can be fixed, but I'm not sure how...

- Robert

Michel

May 29, 2007, 3:50:25 AM
to sage-devel
Hi,

Thanks for taking the time to make up this example. I think it is
good to put it in the documentation. Yes, you can break factorcaching
(with auto_reduce=False) the way you do it. ***But*** let me point
out that by rewriting your example only slightly it becomes *much*
faster than the current implementation:

sage: from sage.rings.fraction_field_cache import FractionField_cache
sage: from sage.rings.fraction_field import FractionField_generic
sage: R.<x,y,z>=QQ[]
sage: T = [ZZ.random_element()*x + ZZ.random_element()*y + ZZ.random_element()*z for _ in range(10)];

sage: Q=FractionField_cache(R,auto_reduce=False)
sage: TT=[Q(s,1) for s in T]
sage: SS = [prod(random_sublist(TT, .2)) for _ in range(50)]
sage: time f = sum([~s for s in SS])
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s #look here
Wall time: 0.06
sage: len(str(f))
5400
sage: time f.reduce()
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.01
sage: time f(2,3,4)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00
8215193/2109360
sage: Q=FractionField_generic(R)
sage: S=[numerator(s) for s in SS]
sage: time f=sum([~Q(s) for s in S])
CPU times: user 0.35 s, sys: 0.00 s, total: 0.35 s # versus here
Wall time: 0.74
sage: len(str(f))
5402
sage: time f.reduce()
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.02
sage: time f(2,3,4)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.01
8215193/2109360

So maybe in some (I claim non-typical) cases factorcaching requires
a bit of understanding from the user, but the payoff is enormous.
Anyway, as I said, factor caching should be optional. And furthermore
there is an auto_reduce parameter which, when true, emulates the
current behaviour.

Michel

PS. Perhaps it is possible to make your example work out of the box
by somehow using the real gcd for elements which have no cached
factors, or by applying the real gcd to the factors, caching the
result, etc. But I would prefer to take the wrinkles out of the
current implementation first.



Robert Bradshaw

May 29, 2007, 4:43:01 AM
to sage-...@googlegroups.com
On May 29, 2007, at 12:50 AM, Michel wrote:

> Hi,
>
> Thanks for taking the time to make up this example. I think it is
> good to put it in the documentation.
> Yes you can break factorcaching (with auto_reduce=False) the way you
> do it. ***But***
> let me point out that by rewriting your example only slightly it
> becomes *much* faster than the current
> implementation

[...]

> So maybe in some (I claim non-typical) cases factorcaching requires a
> bit understanding from the
> user but the payoff is enormous. Anyway as I said factor caching
> should be optional.

Factor caching should be optimal if the operands are completely
factored, which is essentially what you enforced here for the speedup:

> sage: TT=[Q(s,1) for s in T]

However, I think it is too much to assume that elements are always
known products of primes, and factoring the input on initialization
would probably be prohibitively expensive. If it's not (even an
average of two prime factors per denominator), things may quickly get
out of hand, which was the point of my example. Doing arithmetic on a
set of elements with an absolutely bounded denominator is not that
atypical, nor is being handed two fraction field elements with an
unknown non-trivial gcd in their denominator (especially if they
relate to the same problem).

> And furthermore there
> is an auto_reduce parameter, which when true emulates the current
> behaviour.

This is good.

> PS. Perhaps it is possible to make your example work out of the box by
> somehow using the real gcd
> for elements which have no cached factors, or to apply the real gcd to
> the factors, caching the result etc...

I did think about this... One would then have to find gcd's between
all pairs of (non-prime) factors which would get expensive very
quickly compared to a single gcd of the product.
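The cost being described can be sketched with integers standing in
for polynomials (illustrative only): refining two factor lists of
lengths m and n takes on the order of m*n gcd calls, versus one gcd
of the two expanded products.

```python
import math

# Sketch of the pairwise refinement: with m factors on one side and n
# on the other, recognizing common parts costs m*n gcd calls.
def pairwise_common(fac_a, fac_b):
    """Collect non-trivial gcd's between all pairs of factors."""
    gcds = []
    for a in fac_a:
        for b in fac_b:
            g = math.gcd(a, b)
            if g != 1:
                gcds.append(g)
    return gcds

# Four gcd calls for two factors on each side.
assert pairwise_common([6, 35], [10, 21]) == [2, 3, 5, 7]
```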

> But I would prefer to take the wrinkles out of the current
> implementation first.

I agree.

I don't want to give the impression that I think your code is
without merit--I think it is quite worthwhile and probably worth
including in SAGE (though that's in Nick and William's ballpark now).
For example, it's much better for rings that don't implement any gcd,
or are inexact, where no reduction is currently performed at all. And
obviously you have a case in mind where it helps immensely. I do have
to say I am wary of including something (by default at least) that
can give a 10x speedup in some cases and a 1000x slowdown in others.
- Robert

Michel

May 29, 2007, 4:12:22 PM
to sage-devel

>
> I did think about this... One would then have to find gcd's between
> all pairs of (non-prime) factors which would get expensive very
> quickly compared to a single gcd of the product.

I am not sure this is so clear cut in general. But since your example
is in a sense the worst possible case, probably no algorithm
can improve on taking gcd's after every step.

> I do have
> to say I am wary of including something (by default at least) that
> can give a 10x speedup in some cases and 1000x slowdown in others.
>

Well, there seems to be a misunderstanding. I want to repeat that I
never wanted this to be enabled by default. I want to give the
FractionField factory method an optional parameter 'cache' which,
when True, would return an instance of FractionField_cache instead
of FractionField_generic. The documentation would explain the
benefits and pitfalls of this option. Your example is perfectly good
for that.
Michel

