GSoC 2026 - Coordinate graded commutative and exterior algebras


Hapsa

Mar 8, 2026, 11:44:27 PM
to sage-gsoc

Hi Travis,

I'm putting together a GSoC proposal for the graded commutative and exterior algebras project. I've already worked in this part of the SageMath codebase before: I fixed a bug in the exterior algebra multiplication logic in PR #40741.

I was reading through Issue #34437 and the Peifer notes on the F4 algorithm. It looks like the main bottleneck is the memory overhead from copying dense matrices during row reduction, and I noticed you started testing a sparse approach using dictionaries of leading supports. Coming from a strong C++ competitive programming background, I'm used to improving the performance of complex data structures, and I'd love to bring that experience to optimizing these Cython and linear algebra routines.
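To make sure I understand the idea, here is how I picture the dictionary-of-leading-supports approach, as a toy sketch in plain Python (this is my own illustration, not the actual SageMath experiment): each row is a `{column: coefficient}` dict, and pivot rows are indexed by their leading column, so the reduction step never copies a dense matrix.

```python
# Toy sketch of sparse row reduction with rows stored as dicts
# (illustrative only; names and structure are my own, not SageMath's).
from fractions import Fraction

def reduce_row(row, pivots):
    """Reduce a sparse row (dict col -> coeff) against pivot rows
    keyed by their leading column; return the reduced row."""
    row = dict(row)
    while row:
        lead = min(row)                   # leading support = smallest column
        if lead not in pivots:
            break                         # leading term survives: new pivot
        pivot = pivots[lead]
        factor = row[lead] / pivot[lead]
        for col, coeff in pivot.items():  # row -= factor * pivot, sparsely
            new = row.get(col, Fraction(0)) - factor * coeff
            if new:
                row[col] = new
            else:
                row.pop(col, None)
    return row

def add_row(row, pivots):
    """Reduce a row and, if nonzero, install it as a new pivot."""
    row = reduce_row(row, pivots)
    if row:
        pivots[min(row)] = row
    return pivots
```

Only the nonzero entries of each pivot row are ever touched, which is where I understand the memory savings over dense elimination come from.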

Before I outline my timeline, I just wanted to clarify the scope regarding the 175-hour versus 350-hour variants. Since the project description mentions both the Gröbner basis optimization (#34437) and the native GCA implementation as goals, how would you recommend balancing them? For a 350-hour proposal, should I aim to tackle both fully, including building the interfaces to external libraries like Macaulay2?

Regards,

Hetarth Jodha
https://github.com/Hapsa21

tcscrims

Mar 9, 2026, 7:38:00 PM
to sage-gsoc
Hi Hetarth,

It’s not simply a matter of reducing memory overhead from dense matrix computations (which is a separate, independent issue that should be addressed at some point…), nor is it a matter of implementing efficient sparse linear algebra (which is something else SageMath should improve upon). There is a lot of additional structure usually present in these matrices that can be exploited when performing the reduction step in F4. Be aware that this is not a data structures problem, and I do not expect that kind of experience (or general optimization techniques) to be especially applicable to this project.

Please remember that it is your project proposal, and what you want to do within that time period (either 175 or 350 hours) is up to you. The ideas listed there, and their suggested lengths, were my guesses; they are only meant to help you draft your proposal. I can provide feedback on specific proposals, but you can design the project as you wish.

Best,
Travis

Hapsa

Mar 9, 2026, 8:39:57 PM
to sage-gsoc

Hi Travis,

Thanks for the honest feedback. You are right: the math required for the F4 algorithm is too advanced for me right now, and I am glad you told me before I spent too much time on it.

Based on your suggestion, I would like to change my project topic to one of the other two ideas you mentioned. I think these fit my current skills perfectly:

  1. Efficient sparse linear algebra computations: I can look into optimizing the matrix operations using compressed formats like CSR (Compressed Sparse Row) or CSC, and writing better algorithms to speed up sparse matrix multiplications.

  2. Reducing memory overhead for dense matrices: I can explore ways to manage memory better, such as using memory pooling to reduce allocation overhead and fragmentation, or optimizing in-place operations to improve cache locality.
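For item 1, here is the kind of layout I have in mind, as a small plain-Python sketch (my own illustration of the standard CSR format, not SageMath code): three flat arrays (`data`, `indices`, `indptr`) replace the dense matrix, so a matrix-vector product touches only the nonzero entries.

```python
# Toy CSR (Compressed Sparse Row) sketch: illustrative only.
def csr_from_dense(M):
    """Convert a dense row-major matrix to (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in M:
        for j, x in enumerate(row):
            if x:
                data.append(x)     # nonzero values, row by row
                indices.append(j)  # their column indices
        indptr.append(len(data))   # where each row's entries end
    return data, indices, indptr

def csr_matvec(data, indices, indptr, v):
    """Multiply a CSR matrix by a vector, skipping all zeros."""
    out = []
    for i in range(len(indptr) - 1):
        s = 0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * v[indices[k]]
        out.append(s)
    return out
```
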

Could you please point me to the specific C++ files, modules, or GitHub issues related to these two problems? I want to read the current code to understand the exact bottlenecks before I write my new proposal.

Regards,

Hetarth

tcscrims

Mar 9, 2026, 8:47:10 PM
to sage-gsoc
Dear Hetarth,

> Thanks for the honest feedback. You are right, the math required for the F4 algorithm is too advanced for me right now. I am glad you told me before I spent too much time on it.

Thank you for your honesty here. It is very important that I (or generally, any mentor) can trust what you tell us.

> Based on your suggestion, I would like to change my project topic to one of the other two ideas you mentioned. I think these fit my current skills perfectly:
>
>   1. Efficient sparse linear algebra computations: I can look into optimizing the matrix operations using compressed formats like CSR (Compressed Sparse Row) or CSC, and writing better algorithms to speed up sparse matrix multiplications.
>
>   2. Reducing memory overhead for dense matrices: I can explore ways to manage memory better, such as using memory pooling to reduce allocation overhead and fragmentation, or optimizing in-place operations to improve cache locality.
>
> Could you please point me to the specific C++ files, modules, or GitHub issues related to these two problems? I want to read the current code to understand the exact bottlenecks before I write my new proposal.

I very strongly suggest that you look carefully at the code within SageMath and on our issue tracker before you make any future attempt to develop a proposal along these lines. A good proposal takes time examining the current situation and developing an appropriate solution that is also feasible.

Best,
Travis
