A Sage interface for FGb (Gröbner bases)


Markus Wageringel

Nov 21, 2018, 5:43:01 PM11/21/18
to sage-devel
Hi everyone.

I created a Sage wrapper for the C interface of FGb, which makes it easy to call FGb from within Sage. The sources are available on GitHub [1] and can be installed as a Python package into Sage:



FGb is a C library by J. C. Faugère for computing Gröbner bases, and supposedly it is one of the fastest implementations in existence. It is included with Maple [2]. FGb is closed source, but it comes with a C interface that is freely distributed for academic use. Some of the features:

• The computations run in parallel. (This only seems to work for computations over finite fields.)
• Elimination/block orders are supported.
• It runs on Linux and Mac. (There seem to be some issues, though. I could not get FGb to work on my Ubuntu machine. It fails with an "Illegal instruction" error.)


In my Sage interface, I implemented just two functions: computing Gröbner bases and computing elimination ideals. The FGb C library supposedly supports other functionality, such as computing Hilbert polynomials, but that part of the library is not documented very well, so it did not make sense to write wrappers for it. The focus is on finding a Gröbner basis which, once computed, can be used by Sage for further computations.
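For intuition about what such a library computes, here is a toy sketch of the classical Buchberger algorithm over GF(p) with lexicographic order, in plain Python. This is for illustration only (FGb's F4/F5 algorithms are vastly more sophisticated); the data representation and function names are mine, not FGb's or Sage's:

```python
# Naive Buchberger algorithm over GF(P), lexicographic order (illustration only).
# A polynomial is a dict mapping exponent tuples to nonzero coefficients mod P.

P = 32003  # an arbitrary small prime for the example

def lm(f):
    return max(f)  # leading monomial: exponent tuples compare lexicographically

def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def add_term(f, m, c):
    c = (f.get(m, 0) + c) % P
    if c:
        f[m] = c
    else:
        f.pop(m, None)

def reduce_poly(f, G):
    """Normal form of f modulo the polynomial list G."""
    f, r = dict(f), {}
    while f:
        m = lm(f)
        c = f[m]
        for g in G:
            if divides(lm(g), m):
                q = tuple(a - b for a, b in zip(m, lm(g)))
                t = c * pow(g[lm(g)], P - 2, P) % P  # cancel the leading term
                for mg, cg in g.items():
                    add_term(f, tuple(a + b for a, b in zip(q, mg)), -t * cg)
                break
        else:  # no leading term of G divides m: move it to the remainder
            r[m] = c
            del f[m]
    return r

def s_poly(f, g):
    """S-polynomial: cancel the leading terms of f and g against each other."""
    l = tuple(max(a, b) for a, b in zip(lm(f), lm(g)))
    s = {}
    for h, sign in ((f, 1), (g, -1)):
        inv = pow(h[lm(h)], P - 2, P)
        q = tuple(a - b for a, b in zip(l, lm(h)))
        for m, c in h.items():
            add_term(s, tuple(a + b for a, b in zip(q, m)), sign * c * inv)
    return s

def buchberger(F):
    G = [dict(f) for f in F]
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        r = reduce_poly(s_poly(G[i], G[j]), G)
        if r:  # nonzero normal form: a new basis element
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

# Example: lex basis of (x^2 + y^2 - 1, x - y) in GF(P)[x, y]; the result
# contains a scalar multiple of 2*y^2 - 1, a polynomial in y alone
# (lex is an elimination order).
f1 = {(2, 0): 1, (0, 2): 1, (0, 0): P - 1}   # x^2 + y^2 - 1
f2 = {(1, 0): 1, (0, 1): P - 1}              # x - y
G = buchberger([f1, f2])
```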

I just wanted to share this. Maybe it is useful for someone.

Markus


Martin R. Albrecht

Nov 22, 2018, 3:49:06 AM11/22/18
to sage-...@googlegroups.com
Hi Markus,

Works without a hitch here (Debian/testing, Sage 8.4). I have been planning on doing this over the years but never got around to it, so really cool to see that you did. Nice work!

Cheers,
Martin
--

_pgp: https://keybase.io/martinralbrecht
_www: https://martinralbrecht.wordpress.com
_jab: martinr...@jabber.ccc.de
_otr: 47F43D1A 5D68C36F 468BAEBA 640E8856 D7951CCF

parisse

Nov 22, 2018, 3:53:43 AM11/22/18
to sage-devel
Did you make any comparisons with Giac?

Some benchmarks from Roman Pearce and my own tests, about 2 years old.

Roman used an Intel Core i5 4570 3.2 GHz with 8 GB DDR3-1600 running 64-bit Linux (4 cores, 4 threads, 6M cache, turbo 3.2 -> 3.6 GHz). I also checked Giac on my Mac (Core i5 2.9 GHz, 2 cores, 4 threads).

Groebner bases mod p are computed with the fastest machine prime supported by each software: for mgb and Singular p = 2^31-1, for Magma p = 11863279, for FGb p = 65521, and for Giac p = 16777213.
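As an aside, most of these are (at or near) the largest prime the system's modular arithmetic can handle: 2^31-1 is a Mersenne prime fitting a signed 32-bit word, 65521 is the largest 16-bit prime, and 16777213 is the largest prime below 2^24 (presumably so that products stay exact in a double's 53-bit mantissa). A quick sanity check in plain Python:

```python
# Check that the benchmark primes are (near-)maximal for their word sizes.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def largest_prime_below(bound):
    n = bound - 1
    while not is_prime(n):
        n -= 1
    return n

print(is_prime(2**31 - 1))          # Mersenne prime, fits a signed 32-bit word
print(largest_prime_below(2**16))   # 65521, FGb's prime
print(largest_prime_below(2**24))   # 16777213, Giac's prime (p^2 < 2^53)
```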

Z/p       | mgb 8b  | Magma  | FGb     | Giac     | Singular 4.0.2 | Giac on my Mac, 1 thread [2 threads]
cyclic8   |   0.715 |  0.970 |   2.050 |    3.123 |         37.490 | 3.13 [2.61]
cyclic9   |  41.157 | 48.860 | 221.890 |  179.689 |       7113.800 | 282-287 [195]
katsura12 |   2.787 |  3.680 |  37.830 |   24.863 |        651.790 | 33 [23.3]
katsura13 |  16.269 | 21.080 | 314.640 |  184.403 |       5187.580 | 247.6 [167.5]
bayes148  |  15.754 | 16.520 | 891.400 | 1206.892 |          5.160 | 97.5 [86]
gametwo7  |   4.701 |  6.070 |  24.220 |   68.336 |        485.750 | 48.2 [38.9]
jason210  |   2.789 |  3.180 |  32.990 |   12.570 |          1.790 | 10.8 [10.5]
mayr42    |  36.755 | 30.780 | 196.530 | 4087.811 |        185.760 | 146.6 [142.6]
yang1     |  17.347 | 15.900 | 494.440 |        - |         90.260 | 193.8 [191]

Q         | mgb 8b  | Magma   | FGb     | Giac    | Singular         | Giac on my Mac, 1 thread [2 threads]
cyclic7   |   0.979 |   0.640 |   1.750 |   0.951 | 16.288 (Mac 19)  | 1.61 [1.10]
cyclic8   |  12.290 |  14.750 |  42.690 |  25.892 |         685.352? | 49.5 [30.0] (this one has been improved recently)
katsura10 |   4.740 |   2.270 |   7.870 |   3.858 |         203.529? | 5.17 [3.79]
katsura11 |  34.180 |  17.130 |  50.220 |  24.437 |        1635.524? | 41.8 [27.5]
alea6     |  50.620 | 114.680 |  88.470 |  52.433 |         1273.341 | 72.8 [57.3]
eco12     |  21.140 |  31.030 |  62.580 |  22.765 |         3333.936 | 42.4 [29.0]
jason210  |  19.530 |   4.790 |  62.270 |  23.831 |         47.273?? | 45.3 [29.3]
noon9     |  42.330 |  45.430 | 157.480 | 126.898 |         1483.740 | 121 [113]


Thierry

Nov 22, 2018, 4:11:39 AM11/22/18
to 'Martin R. Albrecht' via sage-devel
Hi,

On Thu, Nov 22, 2018 at 08:48:58AM +0000, 'Martin R. Albrecht' via sage-devel wrote:
> Hi Markus,
>
> Works without a hitch here (Debian/testing, Sage 8.4). I have been planning on doing this over the years but never got around to it, so really cool to see that you did. Nice work!

It was on my todo list for a while too, since our implementations are very slow. Here "very" means "prohibitively": some systems cannot be solved with Sage (via Singular) in decent time, while FGb could solve them quickly. A colleague of mine had to resort to a lot of tricks to solve a research problem with Sage, and was disappointed when he discovered that FGb could have saved him hours of hard work, after a Maple user showed him the Groebner basis during a seminar. See also the related questions on ask.sagemath.org for concrete examples.

Unfortunately, the fact that it is neither free software nor open source made it lower on my todo list. I wonder whether it would be possible to kindly ask J.C. Faugere to free his code, especially since his work on this is funded by the French public sector.

Thanks a lot for the work!

Ciao,
Thierry

parisse

Nov 22, 2018, 4:43:22 AM11/22/18
to sage-devel


On Thursday, November 22, 2018 at 10:11:39 AM UTC+1, Thierry (sage-googlesucks@xxx) wrote:
Hi,


It was on my todo list for a while too, since our implementations are very slow. Here "very" means "prohibitively": some systems cannot be solved with Sage (via Singular) in decent time, while FGb could solve them quickly.

That's not correct: Sage has a wrapper for Giac, which has fast Gröbner basis code and is also open source. I really wonder why it seems to be ignored.


Friedrich Wiemer

Nov 22, 2018, 5:07:47 AM11/22/18
to sage-devel
Cool Markus, thanks a lot for sharing this! :)


On Thursday, November 22, 2018 at 10:11:39 AM UTC+1, Thierry (sage-googlesucks@xxx) wrote:
Unfortunately, the fact that it is neither free software nor open source made it lower on my todo list. I wonder whether it would be possible to kindly ask J.C. Faugere to free his code, especially since his work on this is funded by the French public sector.

That would be really nice!

Dima Pasechnik

Nov 22, 2018, 5:28:32 AM11/22/18
to sage-...@googlegroups.com
From a Sage-centric point of view, one reason for this is that commutative algebra in Sage is a mess, and the multivariate polynomial backend hardcodes Singular.

I (and many others; e.g. people really want Macaulay2's advanced functionality, such as resolutions, to be available) would really like to see an improvement here.

Where and how do we even start fixing this?

mmarco

Nov 22, 2018, 5:36:02 AM11/22/18
to sage-devel
I would be interested in that too, but it sounds like a complex task. I think one or more GSoC projects could fit into that, and maybe also some thematic Sage Days.

Dima Pasechnik

Nov 22, 2018, 5:52:42 AM11/22/18
to sage-...@googlegroups.com
On Thu, Nov 22, 2018 at 10:36 AM mmarco <mma...@unizar.es> wrote:
>
> I would be interested in that too, but that sounds like a complex task. I think one or more GSoC projects could fit into that... and maybe also some thematic Sage Days.

It's not a GSoC project, as it is first of all a matter of redesigning the corresponding class/category structures to make them meaningful. I don't know enough about Sage's category framework (and, to be honest, only barely enough commutative algebra) to be a leader on this...

Martin R. Albrecht

Nov 23, 2018, 5:46:07 AM11/23/18
to sage-...@googlegroups.com
Hi,

speaking of Giac (sorry, if this should rather be on sage-support or off-list):

Can I get the degree reached during the computation and the sizes of the matrices considered out somehow?

Rationale: in crypto, we usually want to see whether the system under consideration behaves like a random system despite not being one. One way of testing this is to compute the degree of semi-regularity for a semi-regular sequence and compare it with the degree reached during a GB computation of an actual instance.
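For reference, the degree of semi-regularity of m equations of degree d in n variables can be read off the Hilbert series (1 - z^d)^m / (1 - z)^n as the index of its first non-positive coefficient (Bardet, Faugère, Salvy). A small sketch in plain Python; the helper name is mine:

```python
from math import comb

def degree_of_semiregularity(n, m, d=2, max_deg=500):
    """Index of the first non-positive coefficient of (1 - z^d)^m / (1 - z)^n,
    for m semi-regular equations of degree d in n variables."""
    for deg in range(max_deg + 1):
        # coefficient of z^deg: expand (1 - z^d)^m binomially and use
        # [z^j] (1 - z)^(-n) = C(n - 1 + j, n - 1)
        c = sum((-1) ** k * comb(m, k) * comb(n - 1 + deg - d * k, n - 1)
                for k in range(min(m, deg // d) + 1))
        if c <= 0:
            return deg
    return None

# For a regular sequence (m = n) this matches the Macaulay bound m*(d-1) + 1:
print(degree_of_semiregularity(8, 8))    # 9 for 8 quadrics in 8 variables
print(degree_of_semiregularity(8, 16))   # drops as the system gets overdetermined
```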

Cheers,
Martin

Markus Wageringel

Nov 23, 2018, 5:53:51 PM11/23/18
to sage-devel
Thanks for the feedback everyone.


On Thursday, November 22, 2018 at 9:53:43 AM UTC+1, parisse wrote:
Did you make some comparisons with Giac ?

Some benchmarks from Roman Pearce and my own tests, about 2 years old.

I have not done any comparisons, mainly because I cannot do anything about the performance of FGb anyway. Moreover, to my knowledge, Giac does not support elimination orders (at least the Sage interface does not), which made it less suitable for my use cases.

I remember having seen those benchmarks before, but I cannot find them anymore. If you send a copy of the polynomial systems, I can make some rough comparisons of the options available within Sage.

Markus

john_perry_usm

Nov 24, 2018, 2:25:13 PM11/24/18
to sage-devel
I walk into this discussion with some hesitancy, but Christian Eder has developed a rather efficient F4 implementation. [1] I know it works and is quite fast, though I haven't compared it to the implementations mentioned above. Unfortunately, I haven't heard from him in a while after he went off to Iran for a few weeks, and he doesn't seem to have updated his site since then, either.

Is integrating Eder's project something a group might be interested in doing at [2]? I had planned to apply to work on integrating a similar project at [2] (a different sort of F4-style Gröbner basis algorithm [3,4]), but perhaps [1] would be a better bet, since there is no doubt about it and Eder spent at least a year in Paris working with Faugère.

regards



Dima Pasechnik

Nov 24, 2018, 4:51:04 PM11/24/18
to sage-...@googlegroups.com
On Sat, 24 Nov 2018 at 19:25, john_perry_usm <john....@usm.edu> wrote:
I walk into this discussion with some hesitancy, but Christian Eder has developed a rather efficient F4 algorithm. [1] I know it works and is quite fast, though I haven't compared it to the implementations mentioned above. Unfortunately, I haven't heard from him in a while after he went off to Iran for a few weeks, and he doesn't seem to have updated his site since then, either.

His site was updated 5 days ago, so he must just be busy...


Is integrating Eder's project something a group might be interested in doing at [2]? I had planned to apply to work on integrating a similar project at [2] (a different sort of F4-style Gröbner basis algorithm [3,4]) but perhaps [1] would be a good bet since there's no doubt about it and Eder spent at least a year in Paris working with Faugère.

regards



[4] https://dl.acm.org/citation.cfm?id=3087643



parisse

Nov 24, 2018, 5:11:26 PM11/24/18
to sage-devel


On Friday, November 23, 2018 at 11:53:51 PM UTC+1, Markus Wageringel wrote:
Thanks for the feedback everyone.

On Thursday, November 22, 2018 at 9:53:43 AM UTC+1, parisse wrote:
Did you make some comparisons with Giac ?

Some benchmarks from Roman Pearce and my own tests, about 2 years old.

I have not done any comparisons, mainly because I cannot do anything about the performance of FGb anyway. Moreover, to my knowledge, Giac does not support elimination orders (at least the Sage interface does not), which made it less suitable for my use cases.


Giac supports double revlex ordering; this is the order used by the eliminate command of Giac. Geogebra has many examples of eliminate commands at https://dev.geogebra.org/trac/browser/trunk/geogebra/giac/src/test

I remember having seen those benchmarks before, but I cannot find them anymore. If you send a copy of the polynomial systems, I can make some rough comparisons of the options available within Sage.

parisse

Nov 24, 2018, 5:28:21 PM11/24/18
to sage-devel


On Friday, November 23, 2018 at 11:46:07 AM UTC+1, Martin Albrecht wrote:
Hi,

speaking of Giac (sorry, if this should rather be on sage-support or off-list):

Can I get the degree reached during the computation and the sizes of the matrices considered out somehow?

Export GIAC_DEBUG=2 before calling giac; this will display some information on the intermediate computations.

Jean-Pierre Flori

Nov 26, 2018, 5:35:48 AM11/26/18
to sage-devel
For GB addicts, yet another open-source implementation:

Bill Hart

Nov 26, 2018, 11:16:16 AM11/26/18
to sage-devel


On Saturday, 24 November 2018 20:25:13 UTC+1, john_perry_usm wrote:
I walk into this discussion with some hesitancy, but Christian Eder has developed a rather efficient F4 algorithm. [1] I know it works and is quite fast, though I haven't compared it to the implementations mentioned above. Unfortunately, I haven't heard from him in a while after he went off to Iran for a few weeks, and he doesn't seem to have updated his site since then, either.

I assure you he is quite alive. I saw him a few minutes ago! 

From his recent talks, his implementation is nowadays more than competitive.

parisse

Nov 26, 2018, 12:03:54 PM11/26/18
to sage-devel


On Monday, November 26, 2018 at 5:16:16 PM UTC+1, Bill Hart wrote:



From his recent talks, his implementation is nowadays more than competitive.

I confirm that his timings are very good: for example, almost 3 times faster than Giac for cyclic9 modular. On the other hand, the implementation seems not as complete: a quick look indicates that the monomial orderings supported are revlex and lex (no elimination order, though this is probably not hard to add), and coefficients must belong to Z/pZ (support for Q would require more work...).

Simon King

Nov 26, 2018, 4:04:56 PM11/26/18
to sage-...@googlegroups.com
Hi!

On 2018-11-26, parisse <bernard...@ujf-grenoble.fr> wrote:
> not as complete: a quick look indicates that monomial ordering supported
> are revlex and lex (no elimination, this is probably not hard to add), and
> coefficients must belong to Z/pZ (support for Q would require more work...)

What is your definition of "elimination order"? If I understand
correctly, lex *is* an elimination order.

Best regards,
Simon

parisse

Nov 27, 2018, 5:27:13 AM11/27/18
to sage-devel
I meant a more efficient elimination order like double revlex.

On Monday, November 26, 2018 at 10:04:56 PM UTC+1, Simon King wrote:
Hi!

Simon King

Nov 27, 2018, 6:00:16 AM11/27/18
to sage-...@googlegroups.com
Hi Bernard,

On 2018-11-27, parisse <bernard...@ujf-grenoble.fr> wrote:
> I meant a more efficient elimination order like double revlex.

Actually I've never heard of that. The only reference I could find with
duckduckgo was the phrase
"Fast Gröbner basis f4 algorithm (revlex order and double
revlex for elimination), fast rational univariate
representation."
from the slides of your talk entitled "Giac/Xcas short presentation
Interactions with PARI/GP" held in January 2016.

Is it simply a block order where both blocks are revlex? If that's not
the case: Could you point me to an article where "double revlex" is
actually defined and where the benefits are pointed out?

Best regards,
Simon

parisse

Nov 27, 2018, 9:08:37 AM11/27/18
to sage-devel
Yes, it's exactly that: an efficient order for eliminate. In fact, I don't know exactly which orders are supported in gb; I just guess from f4.c:
       ...
        printf("field characteristic   %11d\n", fc);
        if (mo == 0) {
            printf("monomial order                 DRL\n");
        }
        if (mo == 1) {
            printf("monomial order                 LEX\n");
        }
        if ((mo != 0) && (mo != 1)) {
            printf("monomial order           DONT KNOW\n");
        }
        printf("linear algebra option  %11d\n", laopt);
    ....
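So "double revlex" here is a block order whose two blocks are each compared by degrevlex. As an illustration (a sketch only; the helper names are mine, not Giac's), such an order can be expressed as a Python sort key, where the split index is the number of variables to eliminate:

```python
def degrevlex_key(exps):
    # degrevlex: compare total degree first; break ties reverse-lexicographically,
    # with the last variable's exponent counting negatively
    return (sum(exps), tuple(-e for e in reversed(exps)))

def double_revlex_key(exps, k):
    """Block order: degrevlex on the first k (eliminated) variables,
    ties broken by degrevlex on the remaining variables."""
    return (degrevlex_key(exps[:k]), degrevlex_key(exps[k:]))

# Eliminating t from monomials in (t, x, y): any monomial containing t is
# larger than every t-free monomial, so a Groebner basis element with a
# t-free leading monomial lies entirely in k[x, y].
mons = [(1, 0, 0), (0, 5, 5), (2, 1, 0), (0, 0, 1)]
print(sorted(mons, key=lambda m: double_revlex_key(m, 1), reverse=True))
# → [(2, 1, 0), (1, 0, 0), (0, 5, 5), (0, 0, 1)]
```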

john_perry_usm

Nov 27, 2018, 4:33:56 PM11/27/18
to sage-devel
I've been back in touch with Christian Eder (thanks for assuring me he's alive, Bill :-)) and he's happy to support an attempt to get the code into Sage. He's working on extending the code to other fields & the like.

I will definitely apply for the aforementioned coding sprint, to add at least this system to Sage, and maybe the other code I mentioned, too. If anyone else is interested in this, I would be happy to have collaborators; just email me privately, I guess? Again, the page is here [1].




Markus Wageringel

Dec 5, 2018, 5:44:43 PM12/5/18
to sage-devel
On Saturday, November 24, 2018 at 11:11:26 PM UTC+1, parisse wrote:

Giac supports double revlex ordering; this is the order used by the eliminate command of Giac. Geogebra has many examples of eliminate commands at https://dev.geogebra.org/trac/browser/trunk/geogebra/giac/src/test

Nice. It would probably be good to have this function in the Sage interface for Giac as well. Is it also possible to obtain the full Gröbner basis with respect to the revlex/revlex block order, but without elimination?



Meanwhile, Roman Pearce's website [5] is back online and I used those polynomial systems to compare the Sage interfaces libsingular, giacpy_sage and fgb_sage. I created a gist of the code [6].

Versions of libraries used:
• Sage 8.5.beta4
• libSingular-4.1.1
• giacpy_sage-0.6.6, giac-1.4.9.45.p4
• fgb_sage-0.1, fgb-20150719 (14538, 14539)

Primes used: singular: 2^31-1, giac: 16777213, fgb: 65521.

Machine: CentOS 7.5.1804, Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20 GHz, 32 cores, 196 GB memory.
Number of threads in parentheses. Computation time in seconds.

ℤ/p           | singular (1) | giac (4)      giac (16)    | fgb (4)       fgb (16)
      cyclic8 |        62.01 |         7.71          4.88 |         1.86          1.38
      cyclic9 |     11519.16 |       258.85        191.30 |       112.31         55.37
    katsura11 |       107.91 |        12.62         11.10 |         4.04          2.43
    katsura12 |      1090.37 |        67.07         60.25 |        23.09         12.59
     bayes148 |         7.19 |        92.63         92.15 |       571.28        471.59
     gametwo7 |       934.69 |       122.65        116.37 |        20.78         14.30
     jason210 |         2.67 |        23.71         23.39 |        24.58         22.44
       mayr42 |       124.74 |       169.94        166.56 |       139.12        111.08
        yang1 |        49.79 |          [3]           [3] |       257.96        177.65

ℚ             | singular (1) | giac (4)      giac (16)    | fgb (1)
      cyclic7 |          [1] |        10.71         10.62 |         4.20
      cyclic8 |          [2] |       120.44        205.57 |        77.22
     katsura9 |        19.19 |        27.30         27.23 |         3.04
    katsura10 |       236.62 |       241.32        238.23 |        18.58
        alea6 |     35724.62 |       578.94        548.39 |   [4] 331.64
        eco12 |          [1] |      4748.24       4811.32 |       102.06
     jason210 |        13.43 |        65.24         83.82 |        49.38
        noon9 |       132.82 |      3146.97       3159.49 |       136.55

I am a bit surprised that some of the Singular results are a lot worse than reported in [5], cyclic7 in particular. Perhaps starting this computation with some different options would help here.

[1] Aborted after 13.5 hours.
[2] Skipped.
[3] Error, problem too large.
[4] Excessive memory usage of about 300 G; computed on a different machine.

Bill Hart

Dec 6, 2018, 5:41:23 AM12/6/18
to sage-devel


On Wednesday, 5 December 2018 23:44:43 UTC+1, Markus Wageringel wrote:

I am a bit surprised about some of the Singular results being a lot worse than reported in [5], cyclic7 in particular. Perhaps starting this computation with some different options can help here.

<SNIP>
 

All the other systems are using modular methods here, so Roman used the modstd library (the command is modStd) in Singular to get those timings. Indeed, cyclic7 over Q takes about 20s on my laptop in Singular using this approach.

parisse

Dec 6, 2018, 2:00:52 PM12/6/18
to sage-devel


On Wednesday, December 5, 2018 at 11:44:43 PM UTC+1, Markus Wageringel wrote:
On Saturday, November 24, 2018 at 11:11:26 PM UTC+1, parisse wrote:

Giac supports double revlex ordering; this is the order used by the eliminate command of Giac. Geogebra has many examples of eliminate commands at https://dev.geogebra.org/trac/browser/trunk/geogebra/giac/src/test

Nice. It would probably be good to have this function in the Sage interface for Giac as well. Is it also possible to obtain the full Gröbner basis with respect to the revlex/revlex block order, but without elimination?



From giac, call gbasis with the list of polynomials and the list of the first-block indeterminates.

Meanwhile, Roman Pearce's website [5] is back online and I used those polynomial systems to compare the Sage interfaces libsingular, giacpy_sage and fgb_sage. I created a gist of the code [6].

Versions of libraries used:
• Sage 8.5.beta4
• libSingular-4.1.1
• giacpy_sage-0.6.6, giac-1.4.9.45.p4

The timings over Q do not reflect the timings of native Giac; there must be some misconfiguration or something wrong with the Sage interface. Higher timings with 16 threads instead of 4 for a modular algorithm are suspicious.

Markus Wageringel

Dec 7, 2018, 1:53:18 AM12/7/18
to sage-devel
On Thursday, December 6, 2018 at 11:41:23 AM UTC+1, Bill Hart wrote:
All the other systems are using modular methods here, so Roman used the modstd library (the command is modStd) in Singular to get those timings. Indeed, cyclic7 over Q takes about 20s on my laptop in Singular using this approach.

Thanks for the hint. With that, I get the following times over ℚ.

ℚ             | singular (4)  singular (16)
      cyclic7 |        28.79          21.27
      cyclic8 |      1109.03         623.98
     katsura9 |        28.07          25.14
    katsura10 |       329.86         172.15
        alea6 |      3053.70        2549.42
        eco12 |      4879.15        1888.11
     jason210 |       122.07         106.33
        noon9 |      2339.98        1502.74


On Thursday, December 6, 2018 at 8:00:52 PM UTC+1, parisse wrote:
From giac, call gbasis with the list of polynomials and the list of the first-block indeterminates.

That worked. Thanks.
 

The timings over Q do not reflect the timings of native Giac; there must be some misconfiguration or something wrong with the Sage interface. Higher timings with 16 threads instead of 4 for a modular algorithm are suspicious.

While there will be some overhead due to the conversion from and to Sage, it is the same in both cases. In fact, I observe similar times with the native Giac that is installed in the Sage environment when applied to the Giac input file cyclic8 you linked above [1]: 0m27.309s using 4 threads and 1m43.493s using 16. Indeed, CPU usage rarely surpasses 400% during the computation. I am not sure what could be wrong here. Perhaps it is something related to the patches of the Giac in Sage [2]?

parisse

Dec 7, 2018, 2:10:57 AM12/7/18
to sage-devel


On Friday, December 7, 2018 at 7:53:18 AM UTC+1, Markus Wageringel wrote:

While there will be some overhead due to the conversion from and to Sage, it is the same in both cases. In fact, I observe similar times with the native Giac that is installed into the Sage environment, when applied to the Giac input file cyclic8 you linked above [1]: 0m27.309s using 4 threads and 1m43.493s using 16. Indeed, CPU usage rarely surpasses 400% during the computation. I am not sure what could be wrong here. Perhaps it is something related to the patches of the Giac in Sage [2]?


I do not have access to a server with that many CPUs; in fact, I only have access to my Mac with 2 threads. It is therefore possible that something strange happens with 4 or 16 threads that I don't see with 2 threads. If you can reproduce that with native Giac without the patch, then that's probably what happens. But then why does it not scale to 4 or 16 threads? The modular algorithm is very simple: compute the gbasis for n primes (this is done in parallel), then Chinese-remainder the results (not parallelized). For most of the examples in the benchmarks here, Chinese remaindering takes a small percentage of the required time. This corresponds to the timings of Roman Pearce at http://www.cecm.sfu.ca/~rpearcea/mgb.html (4 threads).
Perhaps a memory problem? I mean, malloc/new in parallel may spend a lot of time waiting for locks.
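The two sequential post-processing steps of such a modular algorithm (Chinese remaindering of the coefficients, then rational reconstruction to recover values over Q) can be sketched in a few lines of Python. This is an illustration under stated assumptions (Wang's rational reconstruction algorithm; the function names are mine, not Giac's):

```python
from math import gcd

def crt(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2) for coprime moduli m1, m2."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t, m1 * m2

def rational_reconstruction(a, m):
    """Find p/q with |p|, q <= sqrt(m/2) and q*a = p (mod m), Wang's algorithm."""
    r0, s0 = m, 0
    r1, s1 = a % m, 1
    bound = int((m // 2) ** 0.5)
    while r1 > bound:  # run the extended Euclidean algorithm partway
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if s1 == 0 or gcd(r1, abs(s1)) != 1:
        raise ArithmeticError("no reconstruction; use more primes")
    return (r1, s1) if s1 > 0 else (-r1, -s1)

# Recover the coefficient 3/7 from its images modulo two primes:
p1, p2 = 32003, 32009
a1 = 3 * pow(7, -1, p1) % p1
a2 = 3 * pow(7, -1, p2) % p2
a, m = crt(a1, p1, a2, p2)
print(rational_reconstruction(a, m))  # → (3, 7)
```

The per-prime Gröbner basis computations are independent, which is why this part parallelizes trivially, while the remaindering step above is inherently sequential.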

Dima Pasechnik

Dec 7, 2018, 6:15:56 AM12/7/18
to sage-devel
On Fri, Dec 7, 2018 at 7:10 AM parisse <bernard...@ujf-grenoble.fr> wrote:
>
>
>
> On Friday, December 7, 2018 at 7:53:18 AM UTC+1, Markus Wageringel wrote:
>>
>>
>> While there will be some overhead due to the conversion from and to Sage, it is the same in both cases. In fact, I observe similar times with the native Giac that is installed into the Sage environment, when applied to the Giac input file cyclic8 you linked above [1]: 0m27.309s using 4 threads and 1m43.493s using 16. Indeed, CPU usage rarely surpasses 400% during the computation. I am not sure what could be wrong here. Perhaps it is something related to the patches of the Giac in Sage [2]?
>>
>
> I do not have access to a server with that many CPU. In fact, I only have access to my Mac with 2 threads.

You can certainly get free cloud resources from Google to spin up Linux (and not only Linux) VMs with many cores; they have a faculty programme like this. I've been using it since September.
https://cloud.google.com/edu/


> It is therefore possible that something strange happens with 4 or 16 threads that I don't see with 2 threads. If you can reproduce that with giac native without patch, then that's probably what happens. But then why does it not scale to 4 or 16 threads? The modular algorithm is very simple: compute the gbasis for n primes (this is done in parallel), and chinese remainder after that (not parallelized). For most of the examples in the benchmarks here, chinese remaindering is a small percentage of the time required. This corresponds to the timings of Roman Pearce http://www.cecm.sfu.ca/~rpearcea/mgb.html (4 threads).
> Perhaps a memory problem? I mean, malloc/new in parallel may spend a lot of time waiting for locks.
>

Bill Hart

Dec 7, 2018, 7:07:45 AM12/7/18
to sage-devel


On Friday, 7 December 2018 07:53:18 UTC+1, Markus Wageringel wrote:

<SNIP>

While there will be some overhead due to the conversion from and to Sage, it is the same in both cases. In fact, I observe similar times with the native Giac that is installed into the Sage environment, when applied to the Giac input file cyclic8 you linked above [1]: 0m27.309s using 4 threads and 1m43.493s using 16. Indeed, CPU usage rarely surpasses 400% during the computation. I am not sure what could be wrong here. Perhaps it is something related to the patches of the Giac in Sage [2]?

How many physical cores do you have on the machine (not logical cores), how many CPU sockets are there, and what is the cache structure? (I assume it is at least 16 physical cores, but I'm asking more because this sort of thing often happens because of shared caches.)

Markus Wageringel

Dec 7, 2018, 3:41:41 PM12/7/18
to sage-devel
On Friday, December 7, 2018 at 1:07:45 PM UTC+1, Bill Hart wrote:
How many physical cores do you have on the machine (not logical cores), and how many CPU sockets and what is the cache structure? (I assume it is at least 16 physical cores, but I'm asking more because this sort of thing often happens because of shared caches.)

It has 2 CPU sockets, each having 8 physical cores with 2 threads each. As for the cache:

L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K

Does this answer the question? Also, here is a specification: https://ark.intel.com/products/64584/Intel-Xeon-Processor-E5-2660-20M-Cache-2-20-GHz-8-00-GT-s-Intel-QPI-

parisse

Dec 8, 2018, 12:03:47 PM12/8/18
to sage-devel


On Friday, December 7, 2018 at 12:15:56 PM UTC+1, Dima Pasechnik wrote:
You can certainly get free cloud resources from Google to spin up Linux (and not only Linux) VMs with many cores; they have a faculty programme like this. I've been using it since September.
https://cloud.google.com/edu/


I don't think I'm eligible, and even if I were, I don't want to depend on Google or any company for something like that (the risk of IP problems is much too high, and anyway I don't like the idea that private companies should replace public support for research). Either I can have ssh access to some university Linux server somewhere, or this will be delayed until I buy myself a laptop with more than 2 (real) threads.

Dima Pasechnik

Dec 8, 2018, 5:44:32 PM12/8/18
to sage-devel
On Sat, Dec 8, 2018 at 5:03 PM parisse <bernard...@ujf-grenoble.fr> wrote:
> On Friday, December 7, 2018 at 12:15:56 PM UTC+1, Dima Pasechnik wrote:
>>
>> you can certainly get free cloud resources from Google, to spin out
>> Linux (and not only) VMs with many cores, they have a faculty
>> programme like this.
>> I've been using it since Sept.
>> https://cloud.google.com/edu/
>>
>
> I don't think I'm eligible

I think an MCF at a university in France would be as eligible as the analogue of an MCF in a UK university, like me:
https://cloud.google.com/edu/?options=research-credits#options

> and even if I was, I don't want to depend from google or any company for something like that (the risk of IP problems is much too high

IP problems while working with open-source software? Really?
(Of course, companies that sell hardware and university system administrators might be saying that cloud computing providers have horns on their heads and make you sign away your soul with a blood-filled pen...)

> and anyway I don't like the idea that private companies should replace public support for research). Either I can have ssh access to some university Linux server somewhere

The reality is that many universities have already moved to renting servers in datacenters, and this will become more and more prevalent. There are universities in the UK that do not have any servers at all and run CS education programmes more or less fully in the cloud. Just a couple of days ago I spoke to someone from U. of Portsmouth, where every CS major gets issued a Linux VM at the beginning of their studies and does most of their graded coursework there (at a cost to the university of under 7 euro per student per year).

> or this will be delayed until I buy myself a laptop with more than 2 (reals) threads.

whereas you could run your tests on a 100-core VM, for free...

parisse

unread,
Dec 9, 2018, 2:05:14 AM12/9/18
to sage-devel


Le samedi 8 décembre 2018 23:44:32 UTC+1, Dima Pasechnik a écrit :
On Sat, Dec 8, 2018 at 5:03 PM parisse <bernard...@ujf-grenoble.fr> wrote:

>  and even if I was, I don't want to depend from google or any company for something like that (the risk of IP problems is much too high

IP problems while working with open source software? Really?

Yes, Giac is dual-licensed. For example, the HP Prime calculator uses Giac as its CAS. And I'm almost certain that Google will someday want to replace calculators in the education sector.

 > and anyway I don't like the idea that private company should
replace public support for research). Either I can have ssh access to
some university linux server somewhere

The reality is that many universities already moved to renting servers
in datacenters, and this will become more and more prevalent. There
are universities in UK that do not have any servers at all, and run CS
education programs more or less fully in the cloud.
Just a couple of days ago I spoke to someone from U. of Portsmouth,
where every CS major students gets issued a linux VM at the beginning
of their study, and do most of their graded coursework there (at a
cost for the university for under 7 euro per student per year).


If you pay for a service, that's different. I don't like the idea of being dependent on the net and big datacenters, but it's certainly possible to work on partially commercial projects without fearing anything regarding IP (though you should probably check the contract to be sure).

> or this will be delayed until I buy myself a laptop with more than 2 (reals) threads.

whereas you could run your tests on a 100-core VM, for free...

So what? My current configuration is well adapted to the Xcas and Geogebra user base. I never saw big interest in Giac from the computer algebra and Sage community: OpenDreamKit has ignored Giac and spent part of its resources implementing efficient polynomial algorithms that were already implemented in Giac, and my Groebner basis code is not new code; it has been there for more than a couple of years now.
If I have an opportunity to test on a university server I'll take it, but I will not fight for it. If I don't have the opportunity, then I'll try on the next laptop I buy when the current one has to be replaced. There is no urgency.
Maybe the situation will change in the future, then I may reconsider my priorities.

Dima Pasechnik

unread,
Dec 9, 2018, 5:22:48 AM12/9/18
to sage-devel
On Sun, Dec 9, 2018 at 7:05 AM parisse <bernard...@ujf-grenoble.fr> wrote:
>
>
>
> Le samedi 8 décembre 2018 23:44:32 UTC+1, Dima Pasechnik a écrit :
>>
>> On Sat, Dec 8, 2018 at 5:03 PM parisse <bernard...@ujf-grenoble.fr> wrote:
>>
>> > and even if I was, I don't want to depend from google or any company for something like that (the risk of IP problems is much too high
>>
>> IP problems while working with open source software? Really?
>
>
> Yes, Giac is dual-licensed. For example the HP Prime calculator is using Giac as CAS. And I'm almost certain that google will some day want to replace calculators in the education sector.

So? There are many instances of (open-source) Giac already deployed on
Google servers, and perhaps servers of other commercial cloud
computing providers.

Whether or not you have access to such an instance changes nothing in
terms of Giac's IP.


>
>> > and anyway I don't like the idea that private company should
>> replace public support for research). Either I can have ssh access to
>> some university linux server somewhere
>>
>> The reality is that many universities already moved to renting servers
>> in datacenters, and this will become more and more prevalent. There
>> are universities in UK that do not have any servers at all, and run CS
>> education programs more or less fully in the cloud.
>> Just a couple of days ago I spoke to someone from U. of Portsmouth,
>> where every CS major students gets issued a linux VM at the beginning
>> of their study, and do most of their graded coursework there (at a
>> cost for the university for under 7 euro per student per year).
>>
>
> If you pay for a service, that's different.

You are offered a free trial, which in most, if not all, aspects is
the same as the pay-for product, and you refuse it merely on the
grounds that you don't have to pay for it?

> I don't like the idea to be dependent on the net and big datacenter but it's certainly possible to work on partially commercial projects without fearing anything on IP (but you must probably check the contract to be sure).
>
>> > or this will be delayed until I buy myself a laptop with more than 2 (reals) threads.
>>
>> whereas you could run your tests on a 100-core VM, for free...
>
>
> So what? My current configuration is well adapted to Xcas and Geogebra user basis. I never saw big interest in the computer algebra and sage community in Giac

if you don't show a community how your system performs in the settings
of interest to the community, then naturally you won't get attention
in the community.

: Opendreamkit has ignored Giac and spent part of its ressources to
implement efficient polynomial algorithms that were already
implemented in Giac, my Groebner basis code is not new code, it's
there since more than a couple of years now.

While in principle it would have been good to have Giac on board,
OpenDreamKit is a consortium of like-minded people; otherwise it'd
have been a constant fight and no work done.
The Giac project appears to ignore cloud computing possibilities and
the practice, universally adopted across OpenDreamKit, of using source
version control to facilitate collaboration on the code.

I cannot comment on why certain implementations did not use Giac code,
I am not involved in this work.

parisse

unread,
Dec 9, 2018, 8:42:33 AM12/9/18
to sage-devel
Efficient code does not depend on how you handle it (git, svn, tarballs or whatever). And I don't think different practices are the real reason why Giac was/is mostly ignored here.

After having done a few tests, I think I know why my code over Q is slower with more threads (once the number of threads exceeds, say, 2 or 3): it is related to the amount of memory the various cores have to access, more than to allocation locks. This amount is proportional to the number of threads if each thread does a different modular computation, and it causes many more cache misses. This means I must parallelize within each modular computation instead of running different moduli in parallel.
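The multimodular strategy under discussion — run the computation modulo several primes in parallel, then combine the residues with the Chinese Remainder Theorem — can be sketched in plain Python. This is only a toy stand-in: the primes, the `work_mod_p` placeholder, and the thread pool are illustrative assumptions, not Giac's actual code, and the per-prime "computation" here is a single modular product rather than a Groebner basis over GF(p).

```python
from concurrent.futures import ThreadPoolExecutor

PRIMES = [1000000007, 1000000009, 1000000021, 1000000033]
TRUE_VALUE = 123456789 * 987654321  # the integer we want to reconstruct

def work_mod_p(p):
    # stand-in for one modular computation (e.g. a Groebner basis over
    # GF(p)); each thread works with its own modulus, so the combined
    # working set grows with the number of threads -- the cache-pressure
    # effect described above
    return TRUE_VALUE % p

def crt_pair(r1, m1, r2, m2):
    # combine r1 mod m1 and r2 mod m2 into one residue mod m1*m2
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return r1 + m1 * t, m1 * m2

# one modular computation per prime, run in parallel
with ThreadPoolExecutor() as ex:
    residues = list(ex.map(work_mod_p, PRIMES))

# fold the residues together with pairwise CRT
r, m = residues[0], PRIMES[0]
for ri, pi in zip(residues[1:], PRIMES[1:]):
    r, m = crt_pair(r, m, ri, pi)

# the product of the primes exceeds TRUE_VALUE, so CRT recovers it exactly
assert r == TRUE_VALUE
```

The fix described in the post would instead split the work *inside* each `work_mod_p` call, keeping the per-thread working set (and hence the cache footprint) roughly constant as threads are added.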

Dima Pasechnik

unread,
Dec 9, 2018, 2:44:30 PM12/9/18
to sage-devel
On Sun, Dec 9, 2018 at 1:42 PM parisse <bernard...@ujf-grenoble.fr> wrote:
>
> Efficient code does not depend on how you handle it (git, svn or tarballs or whatever).

Efficiency of handling code does depend upon this; fixing a trivial
C++ issue in Giac takes 10 times longer (and takes 10 times more time
and effort from the community around Giac) than in a similar system
hosted, say, on github.


> And I don't think different practices is the real reason why Giac was/is mostly ignored here.

When the ODK proposal was being written, we were still trying to get
Giac to stop changing distribution tarballs without bumping up version
numbers...


>
> After having done a few tests, I think I know why my code on Q is slower with more threads (if the number of threads exceeds say 2 or 3), it is related to the size of memory the various cores have to acces more than allocation locks, this size is proportionnal to the number of threads if each thread does a different modular computation, and this raises much more cache misses. Which means I must parallelize in each modular computation instead of doing different modulus in parallel.
>

parisse

unread,
Dec 10, 2018, 1:07:29 AM12/10/18
to sage-devel


Le dimanche 9 décembre 2018 20:44:30 UTC+1, Dima Pasechnik a écrit :
On Sun, Dec 9, 2018 at 1:42 PM parisse <bernard...@ujf-grenoble.fr> wrote:
>
> Efficient code does not depend on how you handle it (git, svn or tarballs or whatever).

Efficiency of handling code does depend upon this; fixing a trivial
C++ issue in Giac takes 10 times longer (and takes 10 times more time
and effort from the community around Giac) than in a similar system
hosted, say, on github.


Let me remind you that the Giac source code is available on the Geogebra SVN. Obviously, I have better things to do than duplicate repositories, especially if the version-control software is not the same as the one I'm used to.
If you want to fix an issue in my code, the fastest way is to send a bug report on the Xcas forum. Then I usually fix the bug within a couple of days. I don't understand how it could take 10 times more effort from the community. Actually, I think you just want to coerce me into following your rules. But well, I have the opposite view: the way Sage handles bug reports and so on would take me 10 times more time than the way I work.


> And I don't think different practices is the real reason why Giac was/is mostly ignored here.

When ODK proposal was being written, we still tried getting Giac to
stop changing distribution tarballs without bumping up version
numbers...



Pfff! My guess is that people were not really interested in Giac (and did not really check it) because they have access to Magma when it comes to large computations where Giac has more efficient code than Singular.
Ok, I'll stop here.

Fortunately, there are constructive people in the community, I'd like to thank Samuel Lelièvre and William Stein who managed to open me an account on a multi-CPU server.

Dima Pasechnik

unread,
Dec 10, 2018, 4:44:51 AM12/10/18
to sage-...@googlegroups.com
On Mon, 10 Dec 2018 at 06:07, parisse <bernard...@ujf-grenoble.fr> wrote:


Le dimanche 9 décembre 2018 20:44:30 UTC+1, Dima Pasechnik a écrit :
On Sun, Dec 9, 2018 at 1:42 PM parisse <bernard...@ujf-grenoble.fr> wrote:
>
> Efficient code does not depend on how you handle it (git, svn or tarballs or whatever).

Efficiency of handling code does depend upon this; fixing a trivial
C++ issue in Giac takes 10 times longer (and takes 10 times more time
and effort from the community around Giac) than in a similar system
hosted, say, on github.


Let me remind you that Giac source code is available on Geogebra SVN.

this is not mentioned on Giac pages, and if I ever heard about it I blissfully forgot. I did check a couple of times.
I take it that you either do not use svn at all or prefer outsiders not to be aware of it. 
What is the relation between giac source tarballs and that svn?

Obviously, I have better things to do than duplicating repositories, especially if the handling software is not the same that the software I'm used too.
If you want to fix an issue in my code, the fastest way is to send a bug report on the Xcas forum. Then I usually fix the bug in a couple of days. I don't understand how it could take 10 times more effort from the community.


I estimated the time it took to get the patches needed for Giac to support clang 6.0.1.
Several times I had to download Giac tarballs, try to build, etc.,
only to discover that not all the things I reported were fixed, so I had to produce new patches and report them again and again. It took hours in total, spread over several days.
Whereas with a VCS in place and used right, it should have taken minutes, not hours.

Actually, I think you just want to coerce me to follow your rules. But well, I have the opposite view than you, the way sage handles bug reports and so would take me 10 * more time than the way I work.

no, I merely do not want to waste my time if you are unable to efficiently utilise my work.
And I am sure I am not alone in this sentiment; it must be common among people trying to help you make Giac better.
Thank goodness you do not require changes to be mailed to you on floppy disks :-)




> And I don't think different practices is the real reason why Giac was/is mostly ignored here.

When ODK proposal was being written, we still tried getting Giac to
stop changing distribution tarballs without bumping up version
numbers...



Pfff! My guess is that people were not really interested in giac (and did not really check it) because they have access to magma when it comes to large computations where giac has more efficient code than singular.
Ok, I'll stop here.

no, Magma is not common, especially outside the US. I never published anything that used Magma.
(no, in fact once I did, my coauthors used error-correcting codes produced by Magma).




Fortunately, there are constructive people in the community, I'd like to thank Samuel Lelièvre and William Stein who managed to open me an account on a multi-CPU server.

I could have offered to open you an account on a multicore server, but it would actually be a commercial cloud server, not a “university” server (perhaps that's what you actually got from Samuel and William :-)). I cannot give anyone access to Oxford university servers without lengthy paperwork, and in the end it would be a 10-year-old Linux cluster with outdated software and hardware...

parisse

unread,
Dec 10, 2018, 9:20:03 AM12/10/18
to sage-devel
The Giac tarball is self-contained for compiling on GNU systems. The stable version corresponding to the latest Debian packages is giac_stable.tgz; the latest unstable version is giac_unstable.tgz.
The Geogebra SVN has the same source files as the unstable tarball, but it is not suitable for compiling from scratch, because they use different tools to compile (depending on targets: win/mac/linux desktop, jni/javascript/android, etc.).

I made a few changes to the way parallelization is called and got some progress. Now cyclic9 on Q takes a little more than 4h with 1 thread, and 1h41 real time / 4h40 CPU with 6 threads (probably not far from optimal).

parisse

unread,
Dec 11, 2018, 4:46:30 AM12/11/18
to sage-devel
Got cyclic9 on Q with 16 threads in 45 minutes real time (about 6h and 20 minutes CPU time).
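For a rough sense of the parallel efficiency implied by these posts, one can convert the quoted times to minutes and compute speedups. The figures below are approximate readings of the timings quoted above ("a little more than 4h" is taken as 240 minutes, "about 6h and 20 minutes" as 380), so the derived numbers are ballpark only:

```python
# cyclic9 over Q, approximate times in minutes (from the two posts above)
t1 = 240                       # ~4h with 1 thread
t6_real, t6_cpu = 101, 280     # 1h41 real / 4h40 CPU with 6 threads
t16_real, t16_cpu = 45, 380    # 45 min real / ~6h20 CPU with 16 threads

speedup6 = t1 / t6_real        # wall-clock speedup with 6 threads, ~2.4x
speedup16 = t1 / t16_real      # wall-clock speedup with 16 threads, ~5.3x
eff6 = speedup6 / 6            # parallel efficiency, ~40%
eff16 = speedup16 / 16         # parallel efficiency, ~33%
overhead16 = t16_cpu / t1      # total CPU work grows ~1.6x at 16 threads
```

The dropping efficiency and the growing total CPU time are consistent with the cache-pressure explanation given earlier in the thread.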

parisse

unread,
Dec 14, 2018, 1:48:50 PM12/14/18
to sage-devel
Giac source code has been updated, with the following (much faster) timings for gbasis computation on Q

server 32 processors Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, 64G of RAM

16 threads [CPU time, real time], 4 threads [CPU, real], 1 thread
cyclic8 : [43.37,11.25], [31.82,12.15], 26.12 (48 primes)
cyclic9: [19728,1821], [16038,4760], 14625 (377 primes)
katsura12: [536,116], [417,153], 397 (66 primes)
alea6: [116.19,69.66], [105,75], 95.2 (397 primes)
eco12: [67.38,18.00], [50.5,20.7], 41 (18 primes)
jason210: [96.51,20.87], [49.7, 21.4], 44 (9 primes)
noon9: [375.41,124.43], [329,136], 189 (6 primes)

The number of primes explains some timings, like jason210: after a first slow run with 1 prime, giac parallelizes the computation over 4 or 16 primes; for jason210, 9 primes are computed with 4 threads but 17 with 16 threads.
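The prime counts reported above grow with the size of the rational coefficients of the final basis. As a rough back-of-envelope model (my own assumption, not Giac's actual stopping criterion): with word-size primes, CRT plus rational reconstruction needs the product of the primes to exceed roughly twice the product of the numerator and denominator bounds, giving a simple estimate of how many primes are required.

```python
import math

def primes_needed(coeff_bits, prime_bits=31):
    # number of word-size primes whose product exceeds 2 * N * D, where
    # numerators N and denominators D are bounded by coeff_bits bits
    # (the classical sufficient bound for rational reconstruction)
    return math.ceil((2 * coeff_bits + 1) / prime_bits)

# e.g. coefficients around 5843 bits would need about 377 primes,
# the count reported for cyclic9 above; tiny coefficients need only a few
```

Under this model, the 377 primes for cyclic9 versus 6 for noon9 simply reflect vastly different coefficient growth in the two bases.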


Bill Hart

unread,
Dec 15, 2018, 10:25:05 AM12/15/18
to sage-devel
I don't know for sure, but it looks like it has a shared L3 cache across many cores in each socket. I wouldn't be terribly surprised if this was relevant. Of course, only an extended effort with many timings could confirm this for sure. 
 

Bill Hart

unread,
Dec 15, 2018, 11:10:10 AM12/15/18
to sage-devel


On Sunday, 9 December 2018 11:22:48 UTC+1, Dima Pasechnik wrote:

<SNIP>

I cannot comment on why certain implementations did not use Giac code,
I am not involved in this work.

I am involved in that, but I believe there may be some misconceptions here. The implementations we are doing for ODK are not for Groebner bases. However, the group I work for, which develops the system our code will contribute to, has been involved in GB computation for something like thirty years. Our work on GBs, which is not funded by ODK, has been ongoing for that entire period.

As for polynomial arithmetic for applications other than GBs, that work builds on two separate projects that have been ongoing in one form or another since late 2006 in the one case and for some decades in the other case. Integration with those code bases is desirable for a number of reasons.

Bernard, if you have the impression that ODK went out looking for a polynomial library to use then you have the wrong impression. ODK is not a CAS, it's a toolbox. Giac could be part of that toolbox just as easily as any of the other tools developed in Europe, so long as it was designed to integrate with that toolbox. ODK is ultimately about exposing tools to a VRE and making those tools usable for researchers building their own VRE's.

Here is not the place for me to spell out the specific goals we have for the polynomial arithmetic we are implementing for ODK, but usability, flexibility, maintainability, testing, documentation, performance and exposure to a VRE all feature heavily in the feature set we've targeted. Note that the code we are developing is part of the HPC subproject of ODK. You can find information about the whole project, including the explicit subtasks we are contributing to, by digging into the OpenDreamKit website and GitHub repositories. Absolutely *everything* is fully public and accessible, from the initial grant proposal (even the drafts, I believe), through our meetings, goals, funding, performance indicators, through our code and dissemination.

Bill Hart

unread,
Dec 15, 2018, 11:54:00 AM12/15/18
to sage-devel

On Saturday, 15 December 2018 17:10:10 UTC+1, Bill Hart wrote:


On Sunday, 9 December 2018 11:22:48 UTC+1, Dima Pasechnik wrote:

<SNIP>

I cannot comment on why certain implementations did not use Giac code,
I am not involved in this work.

I am involved in that, 

I should probably rather say, I was paid by ODK to write the original implementation of our code, but we now have a different person paid by ODK to (exceedingly capably) do that work. I now serve only an advisory role. Of course I maintain an active interest; my day-to-day job is something else entirely.

parisse

unread,
Dec 15, 2018, 12:19:53 PM12/15/18
to sage-devel
Bill, my feeling is that part of the ODK money was used to improve multivariate polynomial arithmetic implementations precisely in a domain where Giac behaves well (and maybe I should emphasize that, unlike almost all other CASes, Giac is a library, i.e. it is interoperable with any software that can interact with a C++ library). Despite that, ODK ignores Giac.
Well, I'm not sure it's the right place to discuss that, and anyway past is the past.
Now that I made the effort to fine tune my gbasis code, it behaves very well on Q on multi-CPUs architectures. We'll see if there is more interest or not.

Bill Hart

unread,
Dec 15, 2018, 2:57:02 PM12/15/18
to sage-devel


On Saturday, 15 December 2018 18:19:53 UTC+1, parisse wrote:
Bill, my feeling is that part of ODK money was used to improve multivariate polynomial arithmetic implementations precisely in a domain where Giac behaves well (and maybe I should emphasize that unlike almost all other CAS, Giac is a library, i.e. is interoperable with any software that can interact with a C++ library). Despite that, ODK ignores Giac.

I'm really puzzled by this. There is even overlap *within* ODK.

Our local site and another colleague of ours also contributed to parallel root clustering, a superoptimiser, a Jupyter interface, SIMD optimisation, a quadratic sieve and so on. I think Giac barely covers any of that. We aren't trying to do multivariate arithmetic; we are trying to improve and modernise the system that we have been developing for 30 years in the direction of HPC, and expose it to a VRE so that it can be a more useful tool in the "toolbox". Giac does not address that for us.

But even if we just focus on multivariate arithmetic (which we obviously need, just as you do), the new implementation we are working on seems to have different goals:

* lex, deglex, degrevlex (and soon, weighted) orderings in an arbitrary number of variables
* multiprecision exponents (to support new algorithms for sparse interpolation)
* near linear scaling of all major algorithms up to many cores with a highly optimised threaded memory manager
* fast specialised multivariate arithmetic and gcd over Z/pZ, Z and Q (and some initial work over finite fields and extensions insofar as it is necessary for the other)
* ideal reduction
* specialised cases for dense, quasi-sparse and sparse algorithms
* integration with Singular via Factory (the library of Singular that does multivariate arithmetic for applications other than GBs)
* exposure to a VRE
* clear, modularised code, documentation and tests

It's not clear to me how much of all that giac supports. And I am sure I've mentioned most of these goals, both publicly and privately. I apologise if I have not made this more clear in the past. It is certainly not our goal to just do what giac already does.

And even if Giac did all that, it is one of many projects doing multivariate polynomial arithmetic in Europe. There are also Trip, Piranha, Factory, Pari/GP, and GAP. I really don't think it is a valid argument that just because your CAS/library can do multivariate arithmetic, we should stop working on it!

Also, I don't believe we've ignored giac at all. I've sent dozens of emails to you personally. I've mentioned your work to many of my colleagues, and when we benchmark our stuff, I have benchmarked giac, and been in contact with you personally when we do. I vaguely recall inviting you for a visit once.

Surely, I am missing your point here. 

Well, I'm not sure it's the right place to discuss that, and anyway past is the past.
Now that I made the effort to fine tune my gbasis code, it behaves very well on Q on multi-CPUs architectures. We'll see if there is more interest or not.

Great. I will mention this to my colleagues next door, who are working on GB's. I am sure they will be very interested to hear about and compare the progress you have made.

parisse

unread,
Dec 15, 2018, 4:22:09 PM12/15/18
to sage-devel


Le samedi 15 décembre 2018 20:57:02 UTC+1, Bill Hart a écrit :


And even if giac did all that, it is one of many projects doing multivariate polynomial arithmetic in Europe. There's also Trip, Piranha, Factory, Pari/GP, Gap. I really don't think it is a valid argument that just because your CAS/library can do multivariate arithmetic that we should stop working on it!


I never said that. The point is that many people seem to ignore Giac. This is how I interpret the thread subject here, or the fact that I was never contacted when the ODK project was launched. But perhaps it's better to be ignored in the end; I'm free to implement whatever I want.

Bill Hart

unread,
Dec 15, 2018, 5:06:02 PM12/15/18
to sage-devel
We were not contacted either. You can actually find the point where I heard about OpenDreamKit on sage-devel. I brought it to the attention of my bosses, and they contacted the project leader to get involved!

But anyway, I better understand what you are saying now. I thought I must have been misunderstanding something. 

David Bernal

unread,
Aug 1, 2019, 3:43:31 PM8/1/19
to sage-devel
Hi Markus,
can you explain to us how you did this in Singular? I tried running your snippet with Cyclic7 and cancelled it after an hour of computation.

Markus Wageringel

unread,
Aug 1, 2019, 5:32:23 PM8/1/19
to sage-devel
Hi David,

my bad, I had forgotten to update the gist. I have just updated it, see [1]. There I used Singular (not libSingular), which usually should not make much of a difference for this kind of computation. However, note that there are some concerning speed issues with Singular/libSingular in Sage which I mentioned at [2] and do not know how to solve. Therefore, it may be worth comparing with libSingular as well. The equivalent libSingular call would be something like this:

from sage.libs.singular.function_factory import ff
modStd = ff.modstd__lib.modStd
ff.setcores(4)
I = sage.rings.ideal.Cyclic(PolynomialRing(QQ, 'x', 7))
gb = modStd(I)

whereas with Singular:

singular.load('modstd')
singular.setcores(4)
I = sage.rings.ideal.Cyclic(PolynomialRing(QQ, 'x', 7))
J = singular.modStd(I).sage()

David Bernal

unread,
Aug 3, 2019, 9:45:14 PM8/3/19
to sage-devel
Hi Markus,
thank you for your prompt reply! It worked. This might not be the right place to ask, but I have two questions regarding the use of Groebner bases in Sage. I'm trying to compute the variety of an ideal (or at least part of it), and my thought was that if I computed a Groebner basis of this ideal, then generating the ideal from the Groebner basis and finding its variety would be equivalent and computationally cheaper. In particular, since I'm not interested in the whole variety, I was planning on "solving" the system of equations.
I have posted this question here and here, in case you have any opinions on this procedure.
Thanks again,
David