Hi Travis,
I'm putting together a GSoC proposal for the graded commutative and exterior algebras project. I've worked in this part of the SageMath codebase before: I fixed a bug in the exterior algebra multiplication logic in PR #40741.
I was reading through Issue #34437 and the Peifer notes on the F4 algorithm. It looks like the main bottleneck is the memory overhead from copying dense matrices during row reduction, and I noticed you started testing a sparse approach using dictionaries of leading supports. Coming from a strong C++ competitive programming background, I'm used to optimizing complex data structures for performance, and I'd love to bring that experience to these Cython and linear algebra routines.
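To show roughly what I have in mind, here is a toy Python sketch of sparse row reduction over the rationals, with each row stored as a dict keyed by column index and pivots tracked by their leading support. This is purely my own illustration of the idea, not code from your branch or from Sage:

```python
from fractions import Fraction

def reduce_row(row, basis):
    """Reduce a sparse row (dict: column index -> coefficient) against
    `basis`, a dict mapping leading column (pivot) -> reducer row.
    Returns the new normalized pivot row, or None if the row reduces to zero."""
    row = dict(row)
    while row:
        lead = min(row)  # smallest column index = leading support
        if lead not in basis:
            # new pivot: normalize so the leading coefficient is 1
            c = row[lead]
            basis[lead] = {j: v / c for j, v in row.items()}
            return basis[lead]
        c = row[lead]
        reducer = basis[lead]
        for j, v in reducer.items():  # row -= c * reducer, kept sparse
            new = row.get(j, Fraction(0)) - c * v
            if new:
                row[j] = new
            else:
                row.pop(j, None)
    return None  # row reduced to zero
```

The point is that each reduction only touches the nonzero entries of the reducer, so the cost tracks the actual support instead of the full row length.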
Before I outline my timeline, I just wanted to clarify the scope regarding the 175-hour versus 350-hour variants. Since the project description mentions both the Gröbner basis optimization (#34437) and the native GCA implementation as goals, how would you recommend balancing them? For a 350-hour proposal, should I aim to tackle both fully, including building the interfaces to external libraries like Macaulay2?
Regards,
Hetarth Jodha
https://github.com/Hapsa21
Hi Travis,
Thanks for the honest feedback. You're right: the math required for the F4 algorithm is too advanced for me right now, and I'm glad you told me before I spent too much time on it.
Based on your suggestion, I would like to change my project topic to one of the other two ideas you mentioned. I think these fit my current skills perfectly:
Efficient sparse linear algebra computations: I can look into optimizing the matrix operations using compressed formats like CSR (Compressed Sparse Row) or CSC, and writing better algorithms to speed up sparse matrix multiplications.
Reducing memory overhead for dense matrices: I can explore ways to manage memory better, such as using memory pooling to reduce allocation overhead and fragmentation, or optimizing in-place operations to improve cache locality.
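As a quick illustration of the first idea, here is a toy CSR matrix-vector product in plain Python. The array names follow the standard CSR convention (values, column indices, row pointers); this is just a sketch of the storage scheme, not Sage's actual implementation:

```python
def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for a matrix A in CSR form:
    data    -> nonzero values, row by row
    indices -> column index of each nonzero
    indptr  -> indptr[i]..indptr[i+1] delimits row i's nonzeros"""
    n_rows = len(indptr) - 1
    y = [0] * n_rows
    for i in range(n_rows):
        s = 0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]  # only nonzero entries contribute
        y[i] = s
    return y
```

The inner loop does work proportional to the number of nonzeros rather than to n², which is exactly the kind of win I would be targeting in the sparse routines.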
Could you please point me to the specific C++ files, modules, or GitHub issues related to these two problems? I want to read the current code to understand the exact bottlenecks before I write my new proposal.
Regards,
Hetarth