Turn off consistent randomness in doctests by default?


Jonathan Kliem

Jun 22, 2020, 5:01:43 AM
to sage-devel
Hi. Could we maybe turn off reproducible "randomness" in doctests by default? There are probably thousands of meaningless doctests because of this.

E.g. https://trac.sagemath.org/ticket/29904

or the doctests for the namespace function `bernoulli` or the `number_of_partitions` function, also in the namespace.

Alternatively we should add `set_random_seed` to all of those places.

Either way, this might be a huge project, because of possible bugs.

Jonathan

Dima Pasechnik

Jun 22, 2020, 5:24:59 AM
to sage-devel


On Mon, 22 Jun 2020, 10:01 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
Hi. Could we maybe turn off reproducible "randomness" in doctests by default? There are probably thousands of meaningless doctests because of this.

E.g. https://trac.sagemath.org/ticket/29904


I don't understand, you say there that it went unnoticed due to a too easy doctest. I don't see how that one had anything to do with randomness.

Also, could you explain what you mean by "randomness" there. Doctests are either tagged "random" (in which case the matching of answers is skipped),

or use random inputs. If a doctest with random input produced inconsistent results due to improperly set seeds, it would have been caught all along.



or the doctests for the namespace function `bernoulli` or the `number_of_partitions` function also in the namespace.

Alternatively we should add `set_random_seed` to all of those places.

Either way, this might be a huge project, because of possible bugs.

Jonathan

--
You received this message because you are subscribed to the Google Groups "sage-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sage-devel+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sage-devel/6a5005ec-717e-400c-b253-514f9101bc72o%40googlegroups.com.

Jonathan Kliem

Jun 22, 2020, 5:58:06 AM
to sage-devel
I had a doctest

            sage: ls = [randint(-100,100) for _ in range(4)]
            sage: intervals = [[x, x+randint(1,50)] for x in ls]
            sage: P = polytopes.hypercube(4, intervals, backend='field')

which then checked `P` for consistency. However, it was somewhat meaningless (missing `set_random_seed()`), as all entries in `ls` were positive. It never caught a sign error that we introduced in #28866.
Adding `set_random_seed` to it does catch the error.

Now this check for consistency certainly wasn't very good in the first place, as two polyhedra might compare equal even if one is set up incorrectly (I just noticed this because of your post, thank you). But it still would have caught the error if the randomness were true randomness.

You might say I should have known to add `set_random_seed()` in the first place, but I'm definitely not the only one who made that mistake, and there are plenty of doctests that claim to test a random event but really test the same event over and over.
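[Editor's note: the effect described above can be sketched in plain Python (not the Sage doctest framework); this is an illustration only, using `random.Random` as a stand-in for Sage's RNG.]

```python
import random

# Sketch in plain Python: with a fixed seed, the "random" inputs are
# identical on every run, so the doctest exercises exactly one case.
rng_fixed = random.Random(0)
ls_fixed = [rng_fixed.randint(-100, 100) for _ in range(4)]

rng_again = random.Random(0)
assert [rng_again.randint(-100, 100) for _ in range(4)] == ls_fixed

# Seeding from OS entropy (analogous to calling set_random_seed()
# with no argument in Sage) draws different inputs on each run.
rng_entropy = random.Random()
ls_varied = [rng_entropy.randint(-100, 100) for _ in range(4)]
```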

Am Montag, 22. Juni 2020 11:24:59 UTC+2 schrieb Dima Pasechnik:


On Mon, 22 Jun 2020, 10:01 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
Hi. Could we maybe turn off reproducible "randomness" in doctests by default? There are probably thousands of meaningless doctests because of this.

E.g. https://trac.sagemath.org/ticket/29904


I don't understand, you say there that it went unnoticed due to a too easy doctest. I don't see how that one had anything to do with randomness.

Also, could you explain what you mean by "randomness" there. Doctests are either tagged "random" (in which case the matching of answers is skipped),

or use random inputs. If a doctest with random input produces inconsistent results due to not properly set seeds, it would have been caught all along.



or the doctests for the namespace function `bernoulli` or the `number_of_partitions` function also in the namespace.

Alternatively we should add `set_random_seed` to all of those places.

Either way, this might be a huge project, because of possible bugs.

Jonathan


Dima Pasechnik

Jun 22, 2020, 7:09:55 AM
to sage-devel


On Mon, 22 Jun 2020, 10:58 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
I had a doctest

            sage: ls = [randint(-100,100) for _ in range(4)]
            sage: intervals = [[x, x+randint(1,50)] for x in ls]
            sage: P = polytopes.hypercube(4, intervals, backend='field')

which then checked `P` for consistency. However, it was somewhat meaningless (missing `set_random_seed()`), as all entries in `ls` were positive. It never caught a sign error that we introduced in #28866.
Adding `set_random_seed` to it does catch the error.

Unless I misunderstand how Sage random numbers are set, by adding an explicit set_random_seed() call you merely went with a different bunch of pseudorandom values than the ones you were hitting without it; you were not fixing a bug. In other words, you just hit a more favourable bunch of pseudorandom data.

To make such tests more robust, one needs to call them in a loop, without resetting the seed. Then you get a bit more (pseudo)randomness.
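[Editor's note: a minimal plain-Python sketch of the loop idea, not Sage code; the property being checked is invented for illustration.]

```python
import random

# Sketch: instead of checking one fixed draw, repeat the property
# check in a loop, drawing fresh samples from a stream that is
# seeded once and never reset inside the loop.
rng = random.Random(0)
for _ in range(100):
    lo = rng.randint(-100, 100)
    hi = lo + rng.randint(1, 50)
    # illustrative property under test: each interval is non-degenerate
    assert lo < hi
```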

Now this check for consistency certainly wasn't very good in the first place, as two polyhedra might compare equal even if one is set up incorrectly (I just noticed this because of your post, thank you). But it still would have caught the error if the randomness were true randomness.

You might say I should have known to add `set_random_seed()` in the first place, but I'm definitely not the only one who made that mistake, and there are plenty of doctests that claim to test a random event but really test the same event over and over.

Am Montag, 22. Juni 2020 11:24:59 UTC+2 schrieb Dima Pasechnik:


On Mon, 22 Jun 2020, 10:01 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
Hi. Could we maybe turn off reproducible "randomness" in doctests by default? There are probably thousands of meaningless doctests because of this.

E.g. https://trac.sagemath.org/ticket/29904


I don't understand, you say there that it went unnoticed due to a too easy doctest. I don't see how that one had anything to do with randomness.

Also, could you explain what you mean by "randomness" there. Doctests are either tagged "random" (in which case the matching of answers is skipped),

or use random inputs. If a doctest with random input produces inconsistent results due to not properly set seeds, it would have been caught all along.



or the doctests for the namespace function `bernoulli` or the `number_of_partitions` function also in the namespace.

Alternatively we should add `set_random_seed` to all of those places.

Either way, this might be a huge project, because of possible bugs.

Jonathan


Jonathan Kliem

Jun 22, 2020, 7:18:09 AM
to sage-devel
Currently, the doctest setup produces the very same random numbers on every instance.

If you call `set_random_seed()` first, you will get somewhat different values on every single doctest run. It can still be annoying, as not every bot will catch every failure, but it is far more likely to catch a regression eventually.

It seems that there are a number of doctests that are supposed to run on different values each time the doctest is run. However, currently exactly one list of values is tested each time.

It's not about getting very good pseudorandomness. It is just about getting some variation in the values.

Michael Orlitzky

Jun 22, 2020, 7:21:12 AM
to sage-...@googlegroups.com
For anyone who doesn't know what you're talking about, e.g.

sage: ZZ.random_element()

in a doctest always produces the same number.

I'm violently opposed to the existing behavior because it's caused me to
introduce several embarrassing bugs. It's counterintuitive and unique to
Sage, so new users and reviewers never learn about it until it's too
late. And finally, the reasoning behind it is just silly. There are
plenty of ways to test random functions without being able to predict
their literal output. For example, instead of,

"""
Ensure that we can produce random integers::

sage: ZZ.random_element()
42
"""

One can easily use

"""
Ensure that we can produce random integers::

sage: ZZ.random_element() in ZZ
True
"""

Changing the behavior will break a lot of tests (as well as shed light
upon many bugs), but the working tests at least can be addressed with
set_random_seed(), in essence by removing their randomness again.
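[Editor's note: Michael's property-style check can be sketched in plain Python; `random_element` below is a hypothetical stand-in for `ZZ.random_element()`, invented for illustration.]

```python
import random

def random_element(lo=-10**6, hi=10**6):
    """Hypothetical stand-in for ZZ.random_element() in plain Python."""
    return random.randint(lo, hi)

# Check the property, not the literal value: whatever the seed,
# the result is an integer within the requested bounds.
x = random_element()
assert isinstance(x, int)
assert -10**6 <= x <= 10**6
```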

Dima Pasechnik

Jun 22, 2020, 7:26:24 AM
to sage-devel


On Mon, 22 Jun 2020, 12:18 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
Currently, the doctest setup produces the very same random numbers on every instance.

If you call `set_random_seed()` first, you will get somewhat different values on every single doctest run. It can still be annoying, as not every bot will catch every failure, but it is far more likely to catch a regression eventually.

There is no need to explicitly set the seed in these tests, because it is done as Sage starts up.
By explicitly setting it you just restart the RNG, for no good reason.
There is no guarantee that this would give you "better" randomness.
If you want more random data in your doctest, repeat tests in a loop.


Jonathan Kliem

Jun 22, 2020, 7:35:16 AM
to sage-devel
Yes, there is a need. Please go into any file of your choice, print out a "random" value in a doctest, and see what happens.
There is not even any variation, as Michael pointed out.

If you start Sage in a shell, there is variation. But Sage in a shell behaves differently than in doctests; e.g., in doctests dictionaries are sorted, but in the shell they are not.
And currently in doctests there is no variation at all in random numbers. You will always get the very same numbers.

Dima Pasechnik

Jun 22, 2020, 7:40:21 AM
to sage-devel


On Mon, 22 Jun 2020, 12:21 Michael Orlitzky, <mic...@orlitzky.com> wrote:
On 2020-06-22 05:01, 'Jonathan Kliem' via sage-devel wrote:
> Hi. Could we maybe turn off reproducible "randomness" in doctests by
> default. There are probably thousands of meaningless doctests because of
> this.
>
> E.g. https://trac.sagemath.org/ticket/29904
>
> or the doctests for the namespace function `bernoulli` or the
> `number_of_partitions` function also in the namespace.
>
> Alternatively we should add `set_random_seed` to all of those places.
>
> Either way, this might be a huge project, because of possible bugs.
>

For anyone who doesn't know what you're talking about, e.g.

  sage: ZZ.random_element()

in a doctest always produces the same number.

I'm violently opposed to the existing behavior because it's caused me to
introduce several embarrassing bugs.


To channel the anger about imperfections of Sage testing (which have nothing to do with its pseudorandom number generator), let me propose to investigate ways to get a Python fuzzing framework into the Sage testing framework.

It's counterintuitive and unique to
sage, so new users and reviewers never learn about it until it's too
late. And finally, the reasoning behind it is just silly. There are
plenty of ways to test random functions without being able to predict
their literal output. For example, instead of,

  """
  Ensure that we can produce random integers::

      sage: ZZ.random_element()
      42
  """

One can easily use

  """
  Ensure that we can produce random integers::

      sage: ZZ.random_element() in ZZ
      True
  """

Changing the behavior will break a lot of tests (as well as shed light
upon many bugs), but the working tests at least can be addressed with
set_random_seed(), in essence by removing their randomness again.


Dima Pasechnik

Jun 22, 2020, 7:43:39 AM
to sage-devel


On Mon, 22 Jun 2020, 12:35 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
Yes there is need. Please go into any file of your choice and print out a "random" value in a doctest and see what happens.

Yes, this is by design; otherwise one cannot get reproducible doctests.
To get more randomness you need to pull it out of the random source.
Randomness is about distributions, not particular values (sorry, I'm starting to sound like I teach a statistics class :-))



Jonathan Kliem

Jun 22, 2020, 7:45:09 AM
to sage-devel
And in case you want reproducible doctests with values that look random, you can always do something like `set_random_seed(0)`.

Jonathan Kliem

Jun 22, 2020, 7:56:51 AM
to sage-devel
And this design has problems, e.g. https://trac.sagemath.org/ticket/28599

Basic story: the random splitting that is done to find real roots is far from stable. Apparently the doctests don't catch on, because they always use the same "random state". In real life, you cannot obtain a regular polygon with 20 vertices with this method.

So some doctests claim that Sage can do something which can be totally wrong (unless it is your lucky day). Maybe if the doctests had shown some variation, the people designing this would have noticed that their random choice could be really bad, and they would have accounted for that.

Dima Pasechnik

Jun 22, 2020, 8:25:29 AM
to sage-devel


On Mon, 22 Jun 2020, 12:56 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
And this design has problems e.g. https://trac.sagemath.org/ticket/28599

Basic story is: The random splitting that is being done to find real roots is far from stable. Apparently the doctests don't catch on, because they always use the same "random state". In real life, you cannot obtain a regular polygon with 20 vertices with this method.

So some doctests claim that Sage can do something which can be totally wrong (unless it is your lucky day). Maybe if the doctests had shown some variation, the people designing this would have noticed that their random choice could be really bad, and they would have accounted for that.

that's what fuzzing frameworks are for, for catching this sort of problems.
However, it would be interesting to try to apply

--- a/src/sage/doctest/forker.py
+++ b/src/sage/doctest/forker.py
@@ -848,7 +848,7 @@ class SageDocTestRunner(doctest.DocTestRunner, object):
             TestResults(failed=0, attempted=4)
         """
         self.setters = {}
-        randstate.set_random_seed(0)
+        randstate.set_random_seed()
         # scipy 1.18 introduced reprecation warnings on a number of things they are moving to
         # numpy, e.g. DeprecationWarning: scipy.array is deprecated
         #             and will be removed in SciPy 2.0.0, use numpy.array instead

and see how many doctests will fail.


Dima Pasechnik

Jun 22, 2020, 9:03:33 AM
to sage-devel
On Mon, Jun 22, 2020 at 1:25 PM Dima Pasechnik <dim...@gmail.com> wrote:
>
>
>
> On Mon, 22 Jun 2020, 12:56 'Jonathan Kliem' via sage-devel, <sage-...@googlegroups.com> wrote:
>>
>> And this design has problems e.g. https://trac.sagemath.org/ticket/28599
>>
>> Basic story is: The random splitting that is being done to find real roots is far from stable. Apparently the doctests don't catch on, because they always use the same "random state". In real life, you cannot obtain a regular polygon with 20 vertices with this method.
>>
>> So some doctests claim that sage can do something, which can be totally wrong (unless you have your lucky day). Maybe if the doctests would have shown some variation, people designing this would have caught on that their random choice could be real bad and they would have to accounted for that.
>
>
> that's what fuzzing frameworks are for, for catching this sort of problems.
> However, it would be interesting to try to apply
>
> --- a/src/sage/doctest/forker.py
> +++ b/src/sage/doctest/forker.py
> @@ -848,7 +848,7 @@ class SageDocTestRunner(doctest.DocTestRunner, object):
> TestResults(failed=0, attempted=4)
> """
> self.setters = {}
> - randstate.set_random_seed(0)
> + randstate.set_random_seed()
> # scipy 1.18 introduced reprecation warnings on a number of things they are moving to
> # numpy, e.g. DeprecationWarning: scipy.array is deprecated
> # and will be removed in SciPy 2.0.0, use numpy.array instead
>
> and see how many doctests will fail.

It's actually not too bad: I tried it on a few parts of Sage, and e.g.
sage/calculus/ passes, sage/geometry/ has 4 failures, sage/graphs/
around 20.

I checked that this simple change produces different data on different
runs, so this is exactly what to do.
Needless to say, doctests which produce random data need adjustment, e.g. by
calling set_random_seed(0), or by making tests not rely on a
particular randomness outcome.

I would prefer this to explicitly calling set_random_seed() in
doctests (i.e. I'd prefer implicit fuzzing of tests to explicit fuzzing).
I've opened https://trac.sagemath.org/ticket/29935 to get this in.

Dima

Michael Orlitzky

Jun 22, 2020, 9:18:11 AM
to sage-...@googlegroups.com
On 2020-06-22 09:03, Dima Pasechnik wrote:
> On Mon, Jun 22, 2020 at 1:25 PM Dima Pasechnik <dim...@gmail.com> wrote:
>
> I would prefer this to explicit calling of set_random_seed() in
> doctests (i.e. it'd prefer implicit fuzzing of tests to explicit one).
> I've opened https://trac.sagemath.org/ticket/29935 to get this in.
>

I think this is far preferable to the status quo. For many functions,
full fuzz testing is undesirable. I don't want to spend hours e.g.
computing polynomial determinants of every order up to N, but I might
like to compute one, to make sure it works. And if the particular
example I compute is irrelevant, it would be nice if I could compute a
"random" one of reasonable size. Doing so won't consume any extra time
on any individual machine, and the multitudes of reviewers and patchbots
running the test on different examples will ferret out any corner cases.

A lot of the existing tests can be fixed proactively, before we change
anything, to keep the branch diff somewhat small.

Nils Bruin

Jun 22, 2020, 1:37:59 PM
to sage-devel
On Monday, June 22, 2020 at 6:18:11 AM UTC-7, Michael Orlitzky wrote:
Doing so won't consume any extra time
on any individual machine, and the multitudes of reviewers and patchbots
running the test on different examples will ferret out any corner cases.

One of the concerns is that you won't get very good error reports this way: without good condition reporting, you might get reports "sometimes this test fails" and in many cases people will probably not bother reporting because the problem went away the next time. In my experience, "corner cases" in mathematical algorithms are often NOT found with random input, because the probability measure of their triggering inputs is vanishingly small.

A cheap compromise might be to make the "starting seed" for tests configurable. The default would be just the seed we have now, but if people want to set it to another value, they can go ahead. It could be helpful if the value used is part of the test result report. If people want, they can then just script the testing with variable seeds. It would offer a cheap workaround to get a bit more coverage in your tests (at some point) if you think your code is particularly sensitive to output from the random generator.
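[Editor's note: Nils's configurable starting seed might look like the following plain-Python sketch; the environment variable name `TEST_RANDOM_SEED` is made up for illustration and is not a real Sage setting.]

```python
import os
import random

# Sketch: default to the fixed seed used today, allow an override via
# a (hypothetical) environment variable, and report the seed in use so
# a failing run can later be reproduced exactly.
seed = int(os.environ.get("TEST_RANDOM_SEED", "0"))
rng = random.Random(seed)
print(f"running doctests with random seed {seed}")
```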

Travis Scrimshaw

Jun 22, 2020, 8:41:04 PM
to sage-devel
I agree with Nils about this, and I like his proposal. I also think the random numbers might (and should) change if other doctests are changed around them. However, having something that consistently fails is better than something that fails only part of the time. I have had doctests fail when a file is run standalone but not within a folder, due to a state being cached in a separate file. There are also some transient errors that occur when running the entire library tests. So having randomness in the doctests makes it harder to detect whether these are actual bugs or not (in my previous example, it was not a bug in the code but in the doctest setup).

Best,
Travis



Michael Orlitzky

Jun 22, 2020, 8:54:28 PM
to sage-...@googlegroups.com
On 2020-06-22 20:41, 'Travis Scrimshaw' via sage-devel wrote:
>
> I agree with Nils about this, and I like his proposal. I also think the
> random number might (and should) change if other doctests are changed
> around it. However, having something that consistently fails is better than
> something that does only part of the time. I have had doctests failing in a
> file run as a standalone than within a folder due to a state being cached
> in a separate file. There are also some transient errors that occur when
> running the entire library tests. So having randomness in the doctests
> makes it harder to detect if these are actual bugs or not (in my previous
> example, it was not a bug in the code but in the doctest setup).

The choice isn't between something that fails consistently and something
that fails randomly. Instead, the choice is between something that is
broken but never fails, and something that is broken but fails randomly.
Code that works is unaffected.

The proposal on the ticket is to make things "as reproducible as
possible" by choosing the seed at random, but displaying it when a test
fails. So afterwards you could do e.g.

$ sage -t --seed=8765309 foo.py

and have it re-run the tests with the same numbers. That will produce
consistent failures, and will let you know when you've fixed the bug.

But more to the point, it will prevent us from committing broken code in
the first place, which will obviate the need for any failures to be
reproducible because there won't be any failures =)
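[Editor's note: the replay workflow described above can be sketched in plain Python; `draw_test_inputs` is a hypothetical helper, not part of Sage.]

```python
import random

def draw_test_inputs(seed):
    """Regenerate the pseudorandom inputs of a test run from its seed."""
    rng = random.Random(seed)
    return [rng.randint(-100, 100) for _ in range(4)]

# Re-running with the seed reported by a failing run reproduces the
# identical inputs, turning a one-off random failure into a
# consistent, debuggable one.
assert draw_test_inputs(8765309) == draw_test_inputs(8765309)
```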