Algebraic operation


Waldek Hebisch

Jul 12, 2023, 7:09:56 PM
to fricas...@googlegroups.com
Partly motivated by Ralf, partly by the needs of various algorithms
I looked at the "algebraic" operations that we offer, and it seems
that we lack, or at least make hard to use, some operations. For
example, consider 'minimalPolynomial': we have that in
SimpleAlgebraicExtension, FramedAlgebra, finite fields and in
Complex. However, AFAICS only finite fields give the possibility
to compute a minimal polynomial over something different than
the parameter of the domain. So when dealing with towers, we have
no easy way to compute the minimal polynomial of an element of
the tower with respect to the base field.
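To make the gap concrete, here is a small illustrative sketch in Python
(not FriCAS code; the representation and all helper names are invented
for this note). It computes the minimal polynomial of sqrt(2)+sqrt(3)
over Q by looking for a linear dependence among its powers in the basis
(1, sqrt(2), sqrt(3), sqrt(6)) of the tower Q(sqrt(2), sqrt(3)):

```python
from fractions import Fraction

# Elements of Q(sqrt2, sqrt3) as coordinate vectors in the basis
# (1, s2, s3, s6) with s6 = s2*s3.  Products of basis elements,
# keyed by (i, j) with i <= j, as (basis index, rational coefficient):
TABLE = {
    (0, 0): (0, 1), (0, 1): (1, 1), (0, 2): (2, 1), (0, 3): (3, 1),
    (1, 1): (0, 2), (1, 2): (3, 1), (1, 3): (2, 2),
    (2, 2): (0, 3), (2, 3): (1, 3),
    (3, 3): (0, 6),
}

def mul(a, b):
    c = [Fraction(0)] * 4
    for i in range(4):
        for j in range(4):
            k, coef = TABLE[(min(i, j), max(i, j))]
            c[k] += a[i] * b[j] * coef
    return c

def solve_in_span(cols, target):
    # Express target as a rational combination of cols, or return None.
    m, n = len(target), len(cols)
    A = [[cols[j][i] for j in range(n)] + [target[i]] for i in range(m)]
    piv, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        A[r] = [v / A[r][c] for v in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [A[i][k2] - f * A[r][k2] for k2 in range(n + 1)]
        piv.append(c)
        r += 1
    if any(A[i][n] != 0 for i in range(r, m)):
        return None              # target is not in the span
    x = [Fraction(0)] * n
    for i, c in enumerate(piv):
        x[c] = A[i][n]
    return x

def minpoly(alpha):
    # Collect powers of alpha until they become linearly dependent.
    powers = [[Fraction(1), Fraction(0), Fraction(0), Fraction(0)]]
    p = alpha
    while True:
        sol = solve_in_span(powers, p)
        if sol is not None:
            n = len(powers)
            # monic, descending coefficients: x^n - sum_i sol[i]*x^i
            return [Fraction(1)] + [-sol[i] for i in range(n - 1, -1, -1)]
        powers.append(p)
        p = mul(p, alpha)

alpha = [Fraction(0), Fraction(1), Fraction(1), Fraction(0)]  # sqrt2 + sqrt3
print(minpoly(alpha))   # coefficients of x^4 - 10*x^2 + 1
```

Roughly this kind of linear-algebra computation is what a FramedAlgebra
structure over the base field would package up.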

What could be done here? One possible way would be to equip each
stage of the tower with the structure of a FramedAlgebra over the
base field. Another way could use triangular systems: we could
specify a field extension via a triangular system giving
equations for the generators and do computations there (possibly
going to SimpleAlgebraicExtension via a primitive element).

--
Waldek Hebisch

Ralf Hemmecke

Jul 13, 2023, 3:00:32 AM
to fricas...@googlegroups.com
Hi Waldek,

On 13.07.23 01:09, Waldek Hebisch wrote:
> Partly motivated by Ralf, partly by the needs of various algorithms I
> looked at the "algebraic" operations that we offer, and it seems that
> we lack, or at least make hard to use, some operations.

Indeed. Up to now I've mostly avoided these "hard" parts of FriCAS.
FriCAS' strength is not in the formal manipulation of expressions that
an "everyday user" usually deals with, but rather in the wealth of
opportunities to combine parametrized domains and packages. Of course,
that is a personal opinion. Nevertheless, it is commonly agreed that
FriCAS has a steep learning curve, and it seems that people turn away
before they even give feedback on what they actually cannot achieve
with FriCAS, although perhaps the functionality is implemented, just
not easily reachable from a beginner's point of view.

> For example, consider 'minimalPolynomial': we have that in
> SimpleAlgebraicExtension, FramedAlgebra, finite fields and in
> Complex. However, AFAICS only finite fields give the possibility to
> compute a minimal polynomial over something different than the
> parameter of the domain. So when dealing with towers, we have no easy
> way to compute the minimal polynomial of an element of the tower with
> respect to the base field.
>
> What could be done here? One possible way would be to equip each
> stage of the tower with the structure of a FramedAlgebra over the
> base field. Another way could use triangular systems: we could
> specify a field extension via a triangular system giving equations
> for the generators and do computations there (possibly going to
> SimpleAlgebraicExtension via a primitive element).

Using triangular sets reminds me of my attempts to implement dynamical
algebraic closures \cite{Steel_AlgebraicallyClosedFields_2002}.

The code is currently here. I haven't touched it recently, but it
worked for simple cases; however, it became unmanageably slow with
several extensions due to slow factorization over finite fields in
FriCAS.

https://github.com/hemmecke/qeta/blob/master/src/iffts.spad
https://github.com/hemmecke/qeta/blob/master/src/ffalgclos.spad
https://github.com/hemmecke/qeta/blob/master/src/algclos.spad

I'd be happy if that made it, in one form or another, into FriCAS.
Maybe, Waldek, it does not exactly solve your problem from above, but I
wanted to share my experiments.

Actually, here too we have domains with history, i.e. they store some
state (namely the triangular set). Theoretically, it works like an
algebraic closure, but the more extensions are involved, the more time
it consumes, so it might be necessary to implement a way to either (A)
create a second copy of such a domain that builds another triangular
set or (B) clear the triangular set and build it anew for a different
task.
(A) has the problem that one would have to add a parameter like the
minimal polynomial for SAE, but that is what I actually wanted to hide
in an implementation of an algebraic closure.
(B) is problematic, since it invalidates all elements of the domain
that have been created before.

Actually, there could be a third option. Elements carry a pointer to the
underlying triangular set of the extension, but then it's probably not
so easy to add elements that have different triangular sets attached.

Ralf

===================
@InProceedings{Steel_AlgebraicallyClosedFields_2002,
author = {Steel, Allan},
editor = {Fieker, Claus and Kohel, David R.},
title = {A New Scheme for Computing with Algebraically Closed
Fields},
booktitle = {Algorithmic Number Theory},
year = 2002,
publisher = {Springer Berlin Heidelberg},
address = {Berlin, Heidelberg},
pages = {491--505},
doi = {10.1007/3-540-45455-1_38},
abstract = {A new scheme is presented for computing with an
algebraic closure of the rational field. It avoids
factorization of polynomials over extension fields,
but gives the illusion of a genuine field to the
user. A technique of modular evaluation into a
finite field ensures that a unique genuine field is
simulated by the scheme and also provides fast
optimizations for some critical operations. Fast
modular matrix techniques are also used for several
non-trivial operations. The scheme has been
successfully implemented within the Magma Computer
Algebra System.},
isbn = {978-3-540-45455-7}
}

Waldek Hebisch

Jul 13, 2023, 4:36:38 PM
to fricas...@googlegroups.com
On Thu, Jul 13, 2023 at 09:00:29AM +0200, Ralf Hemmecke wrote:
>
> Using triangular sets reminds me of my attempts to implement dynamical
> algebraic closures \cite{Steel_AlgebraicallyClosedFields_2002}.
>
> The code is currently here. I haven't touched it recently, but it
> worked for simple cases; however, it became unmanageably slow with
> several extensions due to slow factorization over finite fields in
> FriCAS.
>
> https://github.com/hemmecke/qeta/blob/master/src/iffts.spad
> https://github.com/hemmecke/qeta/blob/master/src/ffalgclos.spad
> https://github.com/hemmecke/qeta/blob/master/src/algclos.spad
>
> I'd be happy if that made it, in one form or another, into FriCAS.
> Maybe, Waldek, it does not exactly solve your problem from above, but
> I wanted to share my experiments.

Tried it a bit. First, CyclotomicPolynomialPackage is now removed; one
needs to replace it by CyclotomicUtilities. Second, it seems that
you are factoring numbers under roots:

(32) ax := xx^2 - 6

2
(32) ? - 6
Type: SparseUnivariatePolynomial(DynamicAlgebraicNumber)
(33) -> r := rootsOf(ax)
[:> , DACFF rootOf poly, ?^2+2305843009213693919]

(33) [r1 r7, - r1 r7]
Type: List(DynamicAlgebraicNumber)

So, the code is smart enough to notice that the square root of 3
already appeared, but the expressions are more complicated than they
would be otherwise (without reusing the previous root this would
double the degree of the extension). And when there are four factors
we get:

(36) -> ax := xx^2 - 11*13*19*43

2
(36) ? - 116831
Type: SparseUnivariatePolynomial(DynamicAlgebraicNumber)
(37) -> r := rootsOf(ax)
[:> , DACFF rootOf poly, ?^2+2305843009213693910]
[:> , DACFF rootOf poly, ?^2+2305843009213693908]
[:> , DACFF rootOf poly, ?^2+2305843009213693902]
[:> , DACFF rootOf poly, ?^2+2305843009213693878]

(37) [r10 r11 r12 r13, - r10 r11 r12 r13]
Type: List(DynamicAlgebraicNumber)

so there are three extra roots.

(5) -> ax := xx^3 - 1/2305843009213693921

3 1
(5) ? - -------------------
2305843009213693921
Type: SparseUnivariatePolynomial(DynamicAlgebraicNumber)
(6) -> r := rootsOf(ax)

>> Error detected within library code:
IFFTS: inv: cannot invert zero element

Of course, the chance that somebody will randomly hit your prime is
negligible. But there is a nontrivial cost to using a 61-bit prime,
and with more practical primes the chance of error, while low, is no
longer negligible.

BTW: Steel et al. write that they force full factoring of the
defining polynomials in case of failure. But AFAICS one
can simply choose a different prime and choose roots in the
finite field in a way which is compatible with the current
defining polynomials.

> Actually, here too we have domains with history, i.e. they store some
> state (namely the triangular set). Theoretically, it works like an
> algebraic closure, but the more extensions are involved, the more time
> it consumes, so it might be necessary to implement a way to either (A)
> create a second copy of such a domain that builds another triangular
> set or (B) clear the triangular set and build it anew for a different
> task.
> (A) has the problem that one would have to add a parameter like the
> minimal polynomial for SAE, but that is what I actually wanted to hide
> in an implementation of an algebraic closure.
> (B) is problematic, since it invalidates all elements of the domain
> that have been created before.

Either way, it seems problematic to store elements of such domains
in files. And even if we ignore the problems above, mutable domains
cause various troubles. A borderline case is Kernel: kernels depend
on the kernel cache, which is separate from Kernel. A kernel carries
enough information to re-create it after the kernel cache is cleared,
and we have code to mark kernels as needing re-insertion into the
cache.

> Actually, there could be a third option. Elements carry a pointer to the
> underlying triangular set of the extension, but then it's probably not so
> easy to add elements that have different triangular sets attached.

As long as you do not need to simplify, just append both (or maybe
interleave them). With some smarts, unused variables (generators)
should have little cost.

--
Waldek Hebisch

Ralf Hemmecke

Jul 13, 2023, 5:31:30 PM
to fricas...@googlegroups.com
> Tried it a bit. First, CyclotomicPolynomialPackage is now removed;
> one needs to replace it by CyclotomicUtilities. Second, it seems that
> you are factoring numbers under roots:

> (32) ax := xx^2 - 6
>
> 2
> (32) ? - 6
> Type: SparseUnivariatePolynomial(DynamicAlgebraicNumber)
> (33) -> r := rootsOf(ax)
> [:> , DACFF rootOf poly, ?^2+2305843009213693919]
>
> (33) [r1 r7, - r1 r7]
> Type: List(DynamicAlgebraicNumber)
>
> So, the code is smart enough to notice that the square root of 3
> already appeared, but the expressions are more complicated than they
> would be otherwise (without reusing the previous root this would
> double the degree of the extension).

I know. However, my main concern was that I wanted to recognise that
sqrt(2)*sqrt(3)-sqrt(6) or sqrt(2)*sqrt(3)+sqrt(6) is 0 (which case
depends on an internal choice). If the user enters sqrt(6) before
sqrt(3), then the machinery has to recognise this and change
accordingly. I think Magma does it in the same way.

> And when there are four factors we get:
>
> (36) -> ax := xx^2 - 11*13*19*43
...
> so there are three extra roots.

Yes, I realized that, too. But as I said, the main concern is to be
able to reliably figure out algebraic dependencies. Note that in FriCAS

(5) -> zero?(sqrt(2)*sqrt(3)-sqrt(6))

(5) false

(6) -> zero?(sqrt(2)*sqrt(3)+sqrt(6))

(6) false

That would allow me to divide by this expression. I want an algebraic
closure and no surprises.
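The kind of zero recognition asked for here can be sketched with
Steel's modular-evaluation idea in a few lines of Python (a toy model,
not the QEta code; the prime 10007 is an arbitrary choice with
p = 3 mod 4 for which 2 and 3 are squares mod p):

```python
# Toy sketch of zero recognition by modular evaluation (Steel 2002).
# Assumption: p is a prime with p = 3 (mod 4), so square roots of
# quadratic residues come from a single powering.
p = 10007

def sqrt_mod(a):
    r = pow(a, (p + 1) // 4, p)
    assert r * r % p == a % p, "a is not a square modulo p"
    return r

r6 = sqrt_mod(6)          # the user introduced sqrt(6) first
r2 = sqrt_mod(2)
r3 = sqrt_mod(3)
# Compatibility: (r2*r3)^2 = 6 = r6^2, so r2*r3 = +-r6; flip the sign
# of r3 if needed so that the relation sqrt(2)*sqrt(3) = sqrt(6) holds.
if r2 * r3 % p != r6:
    r3 = p - r3

print((r2 * r3 - r6) % p)        # 0: this combination is recognized as zero
print((r2 * r3 + r6) % p != 0)   # True: the other combination is nonzero
```

The sign flip is the "change accordingly" step: later root choices are
forced to stay compatible with the ones already made.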

> (5) -> ax := xx^3 - 1/2305843009213693921
>
> 3 1
> (5) ? - -------------------
> 2305843009213693921
> Type: SparseUnivariatePolynomial(DynamicAlgebraicNumber)
> (6) -> r := rootsOf(ax)
>
> >> Error detected within library code:
> IFFTS: inv: cannot invert zero element

Oh, I stopped at some point with this implementation. The fixed prime
should certainly be changed if it turns out to be "unlucky". But I have
no good solution for this "changing machinery" yet.

> Either way, it seems problematic to store elements of such domains in
> files.

Sure. One would also have to store the triangular set with each element.
I actually never thought of storing these numbers on disk.

> And even if we ignore problems above mutable domains cause various
> troubles.

Exactly. But what could be a suitable solution for an algebraic closure
with proper zero recognition? I am not much in favour of a domain
that has internal state. Although, if it were computationally not so
costly, it would not be a big problem for
DynamicAlgebraicClosureFiniteField to be different internally in
different sessions.

> Borderline case is Kernel: kernels depend on kernel cache
> which is separate from Kernel.

Yes, but kernels in cache can have algebraic dependencies and this is
not covered by current code.

>> Actually, there could be a third option. Elements carry a pointer
>> to the underlying triangular set of the extension, but then it's
>> probably not so easy to add elements that have different triangular
>> sets attached.
>
> As long as you do not need to simplify, just append both (or maybe
> interleave them). With some smarts, unused variables (generators)
> should have little cost.

I don't exactly understand what you mean, but it sounds like the
current AlgebraicNumber, where dependencies are not detected.

Ralf


Waldek Hebisch

Jul 13, 2023, 6:55:17 PM
to fricas...@googlegroups.com
On Thu, Jul 13, 2023 at 11:31:27PM +0200, Ralf Hemmecke wrote:
> > Tried it a bit. First, CyclotomicPolynomialPackage is now removed;
> > one needs to replace it by CyclotomicUtilities. Second, it seems
> > that you are factoring numbers under roots:
>
> > (32) ax := xx^2 - 6
> >
> > 2
> > (32) ? - 6
> > Type: SparseUnivariatePolynomial(DynamicAlgebraicNumber)
> > (33) -> r := rootsOf(ax)
> > [:> , DACFF rootOf poly, ?^2+2305843009213693919]
> >
> > (33) [r1 r7, - r1 r7]
> > Type: List(DynamicAlgebraicNumber)
> >
> > So, the code is smart enough to notice that the square root of 3
> > already appeared, but the expressions are more complicated than they
> > would be otherwise (without reusing the previous root this would
> > double the degree of the extension).
>
> I know. However, my main concern was that I wanted to recognise that
> sqrt(2)*sqrt(3)-sqrt(6) or sqrt(2)*sqrt(3)+sqrt(6) is 0 (which case
> depends on an internal choice). If the user enters sqrt(6) before
> sqrt(3), then the machinery has to recognise this and change
> accordingly.

Yes, AFAICS this is the whole point of this domain: regardless of
whether roots are factored or not, the machinery will discover the
dependencies.

> I think
> Magma does it in the same way.

Maybe.

> > And when there are four factors we get:
> >
> > (36) -> ax := xx^2 - 11*13*19*43
> ...
> > so there are three extra roots.
>
> Yes, I realized that, too. But as I said, the main concern is to be
> able to reliably figure out algebraic dependencies. Note that in FriCAS
>
> (5) -> zero?(sqrt(2)*sqrt(3)-sqrt(6))
>
> (5) false
>
> (6) -> zero?(sqrt(2)*sqrt(3)+sqrt(6))
>
> (6) false
>
> That would allow me to divide by this expression. I want an algebraic
> closure and no surprises.

The only surprise is with printing:

(12) -> rl1 := rootsOf(xx^2 - p1*p2)
[:> , DACFF rootOf poly, ?^2+110412165131911888]
[:> , IFFTS extendBy! f, ?^2+110412165131911888]
[:> , IFFTS lvlsz, [5316911983139663348652961669872354241, 2305843009213693921]]
[:> , IFFTS n, 2]
[:> , IFFTS n, -2]
[:> , IFFTS n, 0]

(12) [r1, - r1]
Type: List(DynamicAlgebraicNumber)
(13) -> rl2 := rootsOf(xx^2 - p1)
[:> , DACFF rootOf poly, ?^2+1457092405402532485]

(13) [r2, - r2]
Type: List(DynamicAlgebraicNumber)
(14) -> rl3 := rootsOf(xx^2 - p2)
[:> , DACFF rootOf poly, ?^2+441130959790535368]

(14) [r3, - r3]
Type: List(DynamicAlgebraicNumber)
(15) -> rl1.1 + rl2.1*rl3.1

(15) r2 r3 + r1
Type: DynamicAlgebraicNumber
(16) -> rl1.1 - rl2.1*rl3.1

(16) - r2 r3 + r1
Type: DynamicAlgebraicNumber
(17) -> 1/(rl1.1 - rl2.1*rl3.1)

1
(17) ------------------------------------------- r1
2000000000000000000781800000000000000000702
Type: DynamicAlgebraicNumber
(18) -> 1/(rl1.1 + rl2.1*rl3.1)

>> Error detected within library code:
cannot invert zero element

Instead of factoring, you could run a zero test before printing, to
print zero as zero.

And of course factoring small numbers while not factoring bigger
ones is somewhat of a surprise.

> Exactly. But what could be a suitable solution for an algebraic
> closure with proper zero recognition? I am not much in favour of a
> domain that has internal state. Although, if it were computationally
> not so costly, it would not be a big problem for
> DynamicAlgebraicClosureFiniteField to be different internally in
> different sessions.
>
> > Borderline case is Kernel: kernels depend on kernel cache
> > which is separate from Kernel.
>
> Yes, but kernels in cache can have algebraic dependencies and this is not
> covered by current code.

True.

> > > Actually, there could be a third option. Elements carry a pointer
> > > to the underlying triangular set of the extension, but then it's
> > > probably not so easy to add elements that have different triangular
> > > sets attached.
> >
> > As long as you do not need to simplify, just append both (or maybe
> > interleave them). With some smarts, unused variables (generators)
> > should have little cost.
>
> I don't exactly understand what you mean, but it sounds like the
> current AlgebraicNumber, where dependencies are not detected.

Why? AFAICS appending triangular sets gives you a valid triangular
set. And if you use the same prime for both sets, then the
compatibility conditions should hold. My understanding is that
appending is equivalent to re-running the commands used to generate
the second set in the session which created the first set, but making
sure to never reference the first set (so access via root number must
use an adjusted index). Only "mixed" arithmetic or a zero test is
supposed to cause interaction between polynomials in the two
subsets. Of course, your factoring code breaks the rules, but
as long as it is limited to integer roots this should be
easy to handle.
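The index adjustment can be sketched schematically in Python (a toy
model of triangular sets, not FriCAS code; every name here is
invented): a polynomial over generators 0..i is a dict from exponent
tuples to coefficients, and appending a second set just shifts its
generator indices past those of the first set.

```python
# Appending two triangular sets only requires shifting the generator
# indices of the second set; each polynomial still refers only to
# earlier generators, so triangularity is preserved.
def shift_gens(tset, offset):
    return [{(0,) * offset + mono: c for mono, c in poly.items()}
            for poly in tset]

def append_tsets(t1, t2):
    return t1 + shift_gens(t2, len(t1))

# t1 defines r0 by r0^2 - 2 = 0; t2 defines its own r0 by r0^2 - 3 = 0.
t1 = [{(2,): 1, (0,): -2}]
t2 = [{(2,): 1, (0,): -3}]
combined = append_tsets(t1, t2)
print(combined[1])   # the second polynomial now lives in generator 1
```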

Concerning using different primes, there is a consistency problem
here: a different prime could renumber the roots, so it looks tricky
to ensure that inequality tests decided by the finite field give
the same results.

--
Waldek Hebisch

Waldek Hebisch

Jul 13, 2023, 8:57:29 PM
to fricas...@googlegroups.com
On Thu, Jul 13, 2023 at 09:00:29AM +0200, Ralf Hemmecke wrote:
>
> Code is currently here. I haven't touched it recently, but it worked for
> simple cases, became, however, unmanageably slow with several extensions
> due to slow factorization of finite fields in FriCAS.
>
> https://github.com/hemmecke/qeta/blob/master/src/iffts.spad
> https://github.com/hemmecke/qeta/blob/master/src/ffalgclos.spad
> https://github.com/hemmecke/qeta/blob/master/src/algclos.spad

Concerning slow factorization: one significant reason is that
operations in general finite fields are slow. And AFAICS finite
fields represented by triangular sets are slower than the other
slow variants.

Another thing is that you use expensive operations when faster
alternatives would do. For example, factoring

x^17 - 11*13*17*19

over FiniteField(16777777,17) takes 22.89 sec and factoring over
FiniteField(2305843009213693921,17) takes 150.71 sec, while finding a
root by powering can probably be done in a few milliseconds.
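The simplest instance of "root by powering" is the case
gcd(k, p-1) = 1, where x -> x^k permutes the nonzero elements of GF(p)
and a k-th root is one modular exponentiation. A sketch in Python with
a deliberately tiny prime (the fields above are far larger, and when 17
divides p-1 a slightly more involved powering algorithm is needed):

```python
# k-th root in GF(p) by powering, assuming gcd(k, p-1) = 1.
p = 10007                 # small prime with gcd(17, p-1) = 1
k = 17
N = 11 * 13 * 17 * 19     # the constant from the polynomial above
e = pow(k, -1, p - 1)     # k^(-1) mod p-1  (Python 3.8+)
root = pow(N, e, p)       # then root^k = N in GF(p)
assert pow(root, k, p) == N % p
```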

AFAICS DynamicAlgebraicNumber introduces an extra extension
of degree 2 and tries to factor a cyclotomic polynomial.
The roots of the cyclotomic polynomial are already in the base
field and can be found by factoring in about 40 milliseconds;
using powering it should be several times faster. On a
much slower machine, the new factoring code needs about 1.2
milliseconds to factor the cyclotomic polynomial over
PF(16777777).

Anyway, to be faster, DynamicAlgebraicNumber should first
factor the polynomial over the smallest possible field (that is,
the field containing its coefficients). After that one can
compute which field is needed to split the polynomial
and do the splitting in this field. The final part
is embedding the splitting field in your final field; this
can be done once, and can be done using roots, which is
cheaper than factoring. Some of those improvements
could be done in the factoring code, but in general
DynamicAlgebraicNumber or DynamicAlgebraicClosureFiniteField
has more information. For example, the factoring code could
check at reasonable cost that a polynomial has coefficients
in the base field. But if the coefficients are in intermediate
fields, the check in the factorizer would be much more complicated.
Also, the factorizer could compute embeddings on the fly,
but it is cheaper to create them once and reuse them.

Concerning factorization, the new factorizer (in ffact.spad)
can now handle general finite fields. For high algebraic
extensions it still needs algorithmic improvement, but
it is not far from the best known algorithms. What is
most needed are fast low-level routines like polynomial
multiplication, gcd and matrix-vector multiplication.

One thing to point out is that dealing with multiple algebraic
extensions is inherently exponential. One can hope to make
easy cases fast, but the general case is going to be expensive.

--
Waldek Hebisch

Waldek Hebisch

Jul 15, 2023, 6:46:55 PM
to fricas...@googlegroups.com
On Fri, Jul 14, 2023 at 02:57:16AM +0200, Waldek Hebisch wrote:
> On Thu, Jul 13, 2023 at 09:00:29AM +0200, Ralf Hemmecke wrote:
> >
> > The code is currently here. I haven't touched it recently, but it
> > worked for simple cases; however, it became unmanageably slow with
> > several extensions due to slow factorization over finite fields in
> > FriCAS.
> >
> > https://github.com/hemmecke/qeta/blob/master/src/iffts.spad
> > https://github.com/hemmecke/qeta/blob/master/src/ffalgclos.spad
> > https://github.com/hemmecke/qeta/blob/master/src/algclos.spad
>
> Concerning slow factorization: one significant reason for slow
> factorization is that operations in general finite fields are
> slow. And AFAICS finite fields represented by triangular sets
> are slower than other slow variants.
>
> Another thing is that you use expensive operations when faster
> alternatives would do. For example, factoring
>
> x^17 - 11*13*17*19
>
> over FiniteField(16777777,17) takes 22.89 sec and factoring over
> FiniteField(2305843009213693921,17) takes 150.71 sec, while finding a
> root by powering can probably be done in a few milliseconds.

I have now committed code that switches univariate factorization
over finite fields to the new factorizer. Now the same polynomial
over FiniteField(16777777,17) takes 6.48 sec and over
FiniteField(2305843009213693921,17) takes 35.22 sec.

The improvement is mostly due to the better algorithm; for those
fields both the old and the new factorizer use generic arithmetic
in finite fields.

Further improvement is possible from faster arithmetic routines. But
a large prime like 2305843009213693921 is more expensive than
smaller primes. Currently, probably the best approach would handle
2305843009213693921 via a multimodular method. It needs about
5 machine-handled moduli, so a rough estimate is that it is going
to be about 5-8 times slower.
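The multimodular method can be sketched as follows (illustrative
Python, not the planned implementation): do the multiplication modulo
several machine-sized primes and recombine the residues by the Chinese
Remainder Theorem before reducing modulo the big prime.

```python
# Multimodular multiplication modulo a large prime P: compute the
# product modulo several small primes, recombine with CRT, reduce mod P.
P = 2305843009213693921

def is_prime(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return n >= 2

# collect enough ~20-bit primes so that their product exceeds P^2
moduli, prod, m = [], 1, 2**20
while prod < P * P:
    m += 1
    if is_prime(m):
        moduli.append(m)
        prod *= m

def crt(residues, mods):
    # incremental (Garner-style) Chinese Remainder reconstruction
    x, M = 0, 1
    for r, mod in zip(residues, mods):
        t = ((r - x) * pow(M, -1, mod)) % mod
        x += M * t
        M *= mod
    return x

a, b = P - 12345, P - 67890
residues = [(a % mod) * (b % mod) % mod for mod in moduli]
ab = crt(residues, moduli)
assert ab == a * b          # CRT recovers the exact product
print(ab % P)               # the product in GF(P)
```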

--
Waldek Hebisch

Waldek Hebisch

Jul 20, 2023, 12:01:03 PM
to fricas...@googlegroups.com
As an extra comment: the main cost of factorization is modular
exponentiation. For a field of cardinality p^k, that is, an
algebraic extension of the prime field, our current method
has cost proportional to the length of p^k, that is k*length(p).
This can be lowered to length(p) plus another, smaller term
by using modular composition. But for large p and moderate
k the possible speedup is limited.
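The modular exponentiation in question is the Frobenius power x^p
reduced modulo the defining polynomial f, computed by square-and-multiply
in about length(p) squarings. A minimal Python sketch over a toy prime
(dense coefficient lists, lowest degree first; a real implementation
would use fast polynomial multiplication):

```python
p = 101   # toy prime

def polymulmod(a, b, f):
    # multiply in GF(p)[x] and reduce modulo the monic polynomial f
    n = len(f) - 1
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    for i in range(len(r) - 1, n - 1, -1):
        c = r[i]
        if c:
            for j in range(n + 1):
                r[i - n + j] = (r[i - n + j] - c * f[j]) % p
    return (r + [0] * n)[:n]

def frobenius(f):
    # x^p mod f by square-and-multiply: about log2(p) squarings
    result, base, e = [1], [0, 1], p
    while e:
        if e & 1:
            result = polymulmod(result, base, f)
        base = polymulmod(base, base, f)
        e >>= 1
    return result

# x^2 + 1 splits over GF(101) (101 = 1 mod 4), so x^101 = x mod f
print(frobenius([1, 0, 1]))   # [0, 1], i.e. the polynomial x
```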

--
Waldek Hebisch