On Tue, 2021-03-23 at 17:16 -0700, Volker Braun wrote:
> I don't understand what the big pain point is, if Steve doesn't like
> the limit he can just pick a different value, no?
You can, but then "make ptestlong" no longer works, and the correct
number to use is not obvious.
The "optional - memlimit" tests get run whenever there's any limit
set... not only when there's an _appropriate_ limit set. The one
doctest that needs a memory limit tries to construct a matrix with 2^31
entries over GF(2), so you have to guess a limit that's high enough not
to crash sage (> 3300 MB) but low enough that that matrix can't be
allocated. Both of those numbers (the lower and upper bounds), which
are already pretty close together, change occasionally as libraries and
toolchains do.
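To make the fragility concrete, here's a minimal sketch of the mechanism
involved. This is not Sage's actual implementation; it just shows how an
address-space cap (the kind of limit --memlimit imposes) makes a large
allocation fail while smaller work still succeeds. The 3300 MB figure is
the one quoted above, and RLIMIT_AS behavior assumes Linux (macOS may
ignore it):

```python
# Sketch only: cap the process's address space, then try an
# allocation well above the cap, as the OOM doctest does.
import resource

LIMIT_MB = 3300  # must stay above what sage itself needs to run
limit_bytes = LIMIT_MB * 1024 * 1024
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))

try:
    # Roughly the doctested scenario: a 4 GiB allocation,
    # larger than the 3300 MB cap, so it must fail.
    big = bytearray(2**32)
except MemoryError:
    print("allocation failed, as the doctest expects")
```

The problem the thread describes is exactly that LIMIT_MB has to sit in
the narrow band between "sage still starts" and "the big allocation
still fails", and both ends of that band move over time.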
>
> We do need some way of keeping the memory usage in check, people naturally
> want to show off largeish computations and we run the risk to require a
> beefy developer machine to be able to develop for Sage.
>
I haven't done the requisite git archaeology, but I don't think that's
the purpose of --memlimit. It exists because there's one doctest for
OOM behavior, and you can't count on it running out of memory unless a
limit is set.
In the past, whenever we've hit the limit, we've just increased the
limit.
I agree that we should try to keep the test suite's memory usage down,
but I think the CI is a better place to do it:
* the amount of RAM that gets used depends on the system running
the tests
* we can use the standard "ulimit -v" there
* in the CI, we don't have to guess at a limit that's small enough
to cause the allocation failures that are being doctested, because
we're not trying to cause allocation failures
* breakage will only affect developers and can be fixed quickly
(ideally before being merged)
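For what it's worth, the CI-side idea can be sketched in a couple of
lines of shell. The 4 GiB cap here is an arbitrary example value, and
"make ptestlong" is the target named earlier in the thread; running the
cap in a subshell keeps it from leaking into the rest of the job:

```shell
# Sketch: cap virtual memory for the test run only, inside a subshell.
(
  ulimit -v $((4 * 1024 * 1024))   # ulimit -v takes KiB, so this is 4 GiB
  make ptestlong
)
# The parent shell's limits are unchanged here.
```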