Multiprocessing for Matrix Computations?


Michael Jung

May 29, 2020, 5:46:55 PM
to sage-devel
Dear Sage Developers,
is there multiprocessing support available for computations with matrices of large dimensions? Especially with respect to the division-free algorithm?

Best wishes
Michael

Michael Jung

May 29, 2020, 5:55:14 PM
to sage-devel
Sorry, I meant: "especially for the division-free algorithm for the determinant."

Dima Pasechnik

May 29, 2020, 6:44:19 PM
to sage-devel
What kind of entries do you have in your matrices?
Linbox matrices are quite efficient.
(see https://linalg.org/)



Michael Jung

May 30, 2020, 3:05:28 AM
to sage-devel
Thanks for your response. The entries are elements of the mixed form algebra (https://doc.sagemath.org/html/en/reference/manifolds/sage/manifolds/differentiable/mixed_form_algebra.html), whose multiplication is already relatively slow.

Dima Pasechnik

May 30, 2020, 5:37:02 AM
to sage-devel
On Sat, May 30, 2020 at 8:05 AM Michael Jung <mic...@uni-potsdam.de> wrote:
>
> Thanks for your response. The entries are elements of the mixed form algebra (https://doc.sagemath.org/html/en/reference/manifolds/sage/manifolds/differentiable/mixed_form_algebra.html), whose multiplication is already relatively slow.

Then I imagine it delegates to very basic Sage code to do matrix
operations, as SR is involved.
AFAIK there are no parallelisation options implemented.




Michael Jung

May 30, 2020, 10:37:43 AM
to sage-devel
Mh. Okay. Do you have an idea how to improve the computation, e.g. by using multiple cores?



Nils Bruin

May 30, 2020, 4:33:57 PM
to sage-devel
On Saturday, May 30, 2020 at 7:37:43 AM UTC-7, Michael Jung wrote:
Mh. Okay. Do you have an idea how to improve the computation, e.g. by using multiple cores?

A standard trick is to take a "multimodular" approach: for integer matrices this boils down to computing the answer modulo a whole bunch of primes and then using the Chinese Remainder Theorem to reconstruct the correct answer. The tricky bit is proving bounds that allow you to conclude that you have constructed the right matrix.
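
For concreteness, a minimal sketch of that trick for integer matrices in Sage could look as follows; the Hadamard bound serves as the a priori bound, and the helper name multimodular_det is made up for illustration, not a Sage API:

from sage.all import ZZ, GF, matrix, identity_matrix, next_prime, CRT_list, prod

def multimodular_det(M):
    # Hadamard bound: |det(M)| <= product of the row 2-norms;
    # isqrt(...) + 1 overestimates each norm to stay on the safe side
    B = prod(sum(x**2 for x in row).isqrt() + 1 for row in M.rows())
    primes, residues, N, p = [], [], ZZ(1), ZZ(2)**20
    while N <= 2*B:  # need N > 2*|det| to recover the sign
        p = next_prime(p)
        primes.append(p)
        residues.append(M.change_ring(GF(p)).det().lift())
        N *= p
    d = CRT_list(residues, primes)      # unique lift in [0, N)
    return d if d <= N // 2 else d - N  # symmetric representative

M = matrix(ZZ, 4, 4, list(range(16))) + identity_matrix(ZZ, 4)
print(multimodular_det(M) == M.det())   # True

The determinants modulo the individual primes are independent of each other, so this is also a natural place to bring in multiple cores: each residue can be computed in a separate process.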

For polynomial matrices, this would mean evaluating the matrices at a whole bunch of variable values and then interpolating the answer.

In your case, this could be extra fun with the non-commutative parts of the algebra, but I'd expect that you'll either get acceptable performance by just limiting your specializations to taking fibers of your sheaf, or that the answer you're trying to construct is so horrible that it's beyond the range of objects expressible in your chosen representation.

Michael Jung

May 31, 2020, 3:41:52 PM
to sage-devel
Thanks for your reply. Actually, I consider a commutative sub-algebra here. What do you mean by "taking fibers of [my] sheaf"?

I thought it would be a nice idea to split all the operations needed for the determinant among different CPU cores. What about that?
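
Just to make the idea concrete, here is a minimal sketch of such a split, farming out the cofactors of a Laplace expansion (which is division-free) to worker processes with Sage's @parallel decorator; cofactor is a made-up helper, not how Sage's determinant code actually works:

from sage.all import matrix, var
from sage.parallel.decorate import parallel

x, y = var('x y')
M = matrix(3, 3, [x, y, 1, 1, x*y, y, x + y, 0, x])

@parallel(ncpus=4)  # forks up to 4 worker processes
def cofactor(j):
    # signed minor obtained by deleting row 0 and column j
    return (-1)**j * M[0, j] * M.delete_rows([0]).delete_columns([j]).det()

# a @parallel-decorated function maps over a list of inputs and yields
# ((args, kwds), value) pairs in whatever order the workers finish
det = sum(value for (_, value) in cofactor(list(range(M.ncols()))))
print(bool(det == M.det()))  # True

Whether this pays off depends on how expensive a single entry multiplication is compared to the forking overhead; for cheap entries the overhead dominates.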

Nils Bruin

May 31, 2020, 4:51:27 PM
to sage-devel
On Sunday, May 31, 2020 at 12:41:52 PM UTC-7, Michael Jung wrote:
Thanks for your reply. Actually, I consider a commutative sub-algebra here. What do you mean by "taking fibers of [my] sheaf"?

Specialize to the exterior product algebra of the cotangent space at a point. So, at that point you get a vector in the vector space spanned by elements of the form dz_1 wedge ... wedge dz_r, with coefficients in your base field (perhaps better to take exactly representable field elements; otherwise you get additional numerical problems on top).

You can do the linear algebra operations you want to do there (say, take a determinant or so -- if your vector space admits that operation).

Once you have that at a bunch of points, you can interpolate. To get proven results you'd need some a priori finite dimensional vector space of sections (e.g. a degree bound on the polynomial you're trying to interpolate) and a guarantee that your interpolation points are well-distributed and sufficiently plentiful, so that interpolation gives a unique result.

Michael Jung

May 31, 2020, 5:52:21 PM
to sage-devel
If I understand this correctly, I'd say this is already the approach of how differential forms are implemented in the SageManifolds package. A differential form is seen as an element of the module over the scalar fields which is, roughly speaking, generated by the germs on parallelizable pieces. Thus, on a parallelizable manifold, this is more or less a computation in the symbolic ring. So I think the problem here is the high complexity, which scales horribly with increasing dimension, even if the manifold is parallelizable. Say the manifold has dimension n. Then the mixed differential forms are represented by n+1 homogeneous components which are themselves elements of higher-dimensional (free) modules, of rank binomial(n,k) to be exact. Taking the wedge product just once already involves a relatively high number of operations. On top of that, the division-free computation of the determinant scales quite badly, too.

I already replaced computations involving zeros by simple checks. However, this is not enough.
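
To spell out the dimension counts (my own illustration of the binomials above): a mixed form in dimension n carries sum_k binomial(n,k) = 2^n scalar components in total, so the component count doubles with every extra dimension:

from sage.all import binomial

for n in range(2, 8):
    ranks = [binomial(n, k) for k in range(n + 1)]
    print(n, ranks, sum(ranks))  # the total is 2^n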

Michael Jung

May 31, 2020, 5:58:34 PM
to sage-devel
I should just mention here that a CPU parallelization on the level of components already takes place. However, watching the CPU usage, one can see that only a single core is used. I am not certain about the reason.

Nils Bruin

May 31, 2020, 7:06:44 PM
to sage-devel
On Sunday, May 31, 2020 at 2:52:21 PM UTC-7, Michael Jung wrote:
If I understand this correctly, I'd say this is already the approach of how differential forms are implemented in the SageManifolds package. A differential form is seen as an element of the module over the scalar fields which is, roughly speaking, generated by the germs on parallelizable pieces.

I don't know what a parallelizable piece or a parallelizable manifold is, but the key for interpolation is not to compute with the germs but to compute with lots of values (i.e., don't just localize, but compute in the residue ring). If you do it right, your complicated linear algebra computation now takes place over a field where coefficient swell is orders of magnitude less (and what's more, if it's a field where that's still an issue, you can use the trick again).

As an example, suppose you have a 2x2 matrix with polynomials in two variables x, y of degree (n,m). Then you know that the determinant is a polynomial of degree (2n,2m). So, if you now compute the determinant for values of (x,y) on a (2n+1) x (2m+1) grid, then you can interpolate the determinant back. This can be considerably cheaper than computing the determinant directly over the polynomial ring.
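
A minimal one-variable version of this in Sage might look as follows (the grid version above just iterates the same idea once per variable); the example matrix and the degree bookkeeping are made up for illustration:

from sage.all import QQ, PolynomialRing, matrix

P = PolynomialRing(QQ, 'x')
x = P.gen()
M = matrix(P, 2, 2, [x**2 + 1, 3*x, x - 2, 2*x**2 + x])

# det(M) has degree at most nrows * (max entry degree), so that
# many + 1 evaluation points determine it uniquely
d = M.nrows() * max(f.degree() for f in M.list())
samples = [(QQ(a), M.apply_map(lambda f: f(a), R=QQ).det())
           for a in range(d + 1)]
print(P.lagrange_polynomial(samples) == M.det())  # True

The evaluations are independent of each other, so they can also be distributed over several cores.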



Michael Jung

Jun 1, 2020, 4:55:15 AM
to sage-devel
Ah, interesting. Do you have some literature/references for me?

Nils Bruin

Jun 1, 2020, 8:13:31 PM
to sage-devel
On Monday, June 1, 2020 at 1:55:15 AM UTC-7, Michael Jung wrote:
Ah, interesting. Do you have some literature/references for me?

To me this is just folklore, but it may well be that someone has written something about it. Multimodular approaches are mentioned at least once in the Sage documentation, and there are various multimodular algorithms implemented. That's certainly the circle of ideas that should help.

Marc Mezzarobba

Jun 2, 2020, 10:55:50 AM
to sage-...@googlegroups.com
Michael Jung wrote:
> Ah, interesting. Do you have some literature/references for me?

Among the standard references, Chapter 5 of *Modern Computer Algebra* by
von zur Gathen & Gerhard discusses the basic evaluation-interpolation
algorithm for computing the determinant of polynomial matrices in some
detail. For more sophisticated methods and recent research on fast exact
linear algebra (including both complexity questions and practical speed
issues), look at the papers of the people involved in Linbox.

--
Marc
