That seems a slightly strange decision to me. From the link it is apparent the
code is still in use. To quote from there:
"sage/server/notebook/worksheet.py cannot be removed since the sources
under server/notebook/* are still used by SageNB for conversion from
the old pickled data storage format to the new one."
So why should it not be tested?
> Taking a cue from
> that, today's coverage update contains good news.
This strikes me as rather like fiddling the figures to make the statistics look better.
> I have blacklisted everything under sage/server in generating the new
> doctest reports at . This now means that as of Sage 4.4.4.alpha0,
> the overall weighted coverage score is 84.6%. Previously, the number
> of strategic modules was 180, i.e. we needed to get 180 modules up to
> full coverage in order to meet the 90% coverage goal. Now, the number
> of strategic modules is down to 140.
This reminds me of the saying:
"There are lies, damn lies, and statistics".
Tim Joseph Dumol <tim (at) timdumol (dot) com>
Personally I'd beg to differ. A change in gcc's behavior could easily result in
the code acting differently, as could any number of other system changes.
Here are a few tickets for issues that resulted from just changing compiler versions.
* segfault in Sage-4.4 built using GCC-4.5.0
* frobby optional spkg doesn't build with newer GCC's
* GCC-4.5.0 breaks GAP -- the workspace is broken, hence gap('2+2') fails.
As far as I can see, all those bugs were a result of changing just the compiler
version.
Add to the mix the possibility of different versions of cython behaving
differently, and it seems a bad idea to me.
It's certainly not unknown for a doc test to fail on one machine but pass on
another. So having the same code never guarantees you get the same result.
Over the years, I've come across a lot of code which runs OK on fast computers,
but not on slow ones, or vice versa. One case I recall was someone being rather
stupid and seeding the random number generator from the time of day multiple
times in a loop. On a slow computer, the seed was effectively random each time,
so they got different pseudo-random numbers. On a fast computer, the code
executed in less than a second, and so the RNG was seeded each time with the
same value and produced the same numbers.
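To illustrate that failure mode concretely (a minimal Python sketch; the function names are made up):

```python
import random
import time

def draw_badly(n):
    """Re-seed from whole-second wall-clock time before every draw (the bug)."""
    values = []
    for _ in range(n):
        random.seed(int(time.time()))  # same seed until the clock second ticks over
        values.append(random.random())
    return values

def draw_correctly(n):
    """Seed once, then draw n pseudo-random numbers."""
    random.seed()
    return [random.random() for _ in range(n)]

# On a fast machine the bad loop finishes within one clock second, so every
# draw restarts the generator from the same seed and returns the same value.
print(len(set(draw_badly(5))))      # usually 1 on a fast machine
print(len(set(draw_correctly(5))))  # 5
```

On a slow machine the loop straddles several clock seconds, so the bug hides; the behaviour depends on the speed of the computer, not the code.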
Mathematica on Solaris had a bug when Solaris 10 was updated which showed up
only on slow machines.
So I've known all these to cause bugs, while the source code remains unchanged.
* Changes in compiler version
* Changes in the speed of the computer
* Upgrade of the operating system.
As one final point, there are ports in progress to other platforms, including:
* 64-bit on Solaris SPARC
All of them have the potential to create problems on one platform that are not
seen on another.
Can you dismiss all the above possibilities? If not, why should the code be
exempt from testing?
As for Cython and gcc, the Sage notebook uses pure Python. I do
acknowledge that there's a minuscule chance that a Python update could
change runtime behaviour.
It is worthwhile to note that the code under sage/server/* is only
used to be able to load old pickles of Sage notebooks, and the only
reasonable way I could think a Python update could mess this up is by
a change in pickle format (which is guaranteed against in Python
documentation). The code is not used for any other purpose aside from that.
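For what it's worth, that guarantee is easy to demonstrate: bytes written with the earliest pickle protocol still load with any later Python (the class here is a made-up stand-in, not the real notebook code):

```python
import pickle

class Worksheet(object):
    """Hypothetical stand-in for a notebook class whose pickles must keep loading."""
    def __init__(self, name):
        self.name = name

# Bytes written with the earliest pickle protocol...
old_bytes = pickle.dumps(Worksheet("w0"), protocol=0)

# ...still load today, as the Python documentation guarantees:
restored = pickle.loads(old_bytes)
print(restored.name)  # → w0
```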
But what is used to build python? - gcc of course! So we have *at least* the
following possibilities which could result in a problem.
* gcc update
* python update
* someone patching python (it is already at patch level 8 or so in Sage)
* operating system update
* port to another platform (Cygwin, OpenSolaris and FreeBSD are all being
worked on.)
* someone's computer may be mis-configured.
Less likely, but still not impossible, would be the speed of someone's computer
mattering (BSD.py was such an example), or any of numerous other things I can
think of.
> It is worthwhile to note that the code under sage/server/* is only
> used to be able to load old pickles of Sage notebooks, and the only
> reasonable way I could think a Python update could mess this up is by
> a change in pickle format (which is guaranteed against in Python
> documentation). The code is not used for any other purpose aside from that.
Maybe, but it seems a poor idea to me to remove it from testing, given the
fact that the code is still used - even if only rarely.
What do we gain from removing this code from doctesting?
* Faster doctesting.
* Better looking statistics.
I know what I'd rather have.
Is there *any* other motivation for removing this from the testing, apart from
increasing the percentage of doctest coverage? If not, it boils down to
sacrificing quality for better looking statistics.
>> As for Cython and gcc, the Sage notebook uses pure Python. I do
>> acknowledge that there's a minuscule chance that a Python update could
>> change runtime behaviour.
> But what is used to build python? - gcc of course! So we have *at least*
> the following possibilities which could result in a problem.
> * gcc update
> * python update
> * someone patching python (it is already at patch level 8 or so in Sage)
> * operating system update
> * port to another platform (Cygwin, OpenSolaris and FreeBSD are all
> being worked on.)
> * someone's computer may be mis-configured.
I should have also considered that anything needed to build python could
change the behavior of python. That includes
* zlib - which will be updated in 4.4.4.alpha1
Some I accept are unlikely, but none are impossible.
> Is there *any* other motivation for removing this from the testing,
> apart from increasing the percentage of doctest coverage?
That I feel is the most important question.
> If not, it
> boils down to sacrificing quality for better looking statistics.
Which is what I think we are doing.
On Mon, Jun 14, 2010 at 9:47 PM, Dr. David Kirkby wrote:
But what is happening is even worse. There is now the assumption that "It's safe
to remove tests for existing code".
The only justification seems to be that it makes the numbers look better.
I very, very strongly support Tim's position that we should not
include the old Sage notebook pickle code in the coverage score.
1) His arguments, namely that the code isn't actually used. It is
only the classes in the files that are used to load old notebook
pickles (the code isn't even run). At some point being able to load
these will be deprecated too.
2) Even if we were using that code, it would be foolish to include
it in the doctest coverage code. The standard "doctest" approach to
code testing makes a lot of sense for the math part of the Sage
library, which is meant to be used interactively. It is very awkward
to test the notebook server code, which is not used interactively at
the command line. Instead, for the notebook server, it is much better
to have (a) unit tests, and (b) use Selenium to do functional testing
from a browser.
3) Including the old notebook server code in the statistic
encourages people to write doctests for that code. If anybody did
this, it would be a complete and utter waste of valuable time.
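As a sketch of the unit-test approach suggested in point 2 (the helper function and its behaviour are invented purely for illustration, not taken from the notebook code):

```python
import unittest

def worksheet_filename(name, owner):
    """Invented helper of the kind found in notebook server code."""
    return "%s/%s" % (owner, name.lower().replace(" ", "_"))

class TestWorksheetFilename(unittest.TestCase):
    """Unit tests run headlessly; no interactive doctest session needed."""

    def test_spaces_become_underscores(self):
        self.assertEqual(worksheet_filename("My Sheet", "admin"),
                         "admin/my_sheet")

    def test_owner_prefix(self):
        self.assertTrue(worksheet_filename("x", "bob").startswith("bob/"))

# Run the suite programmatically rather than via the command line:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWorksheetFilename)
unittest.TextTestRunner(verbosity=0).run(suite)
```

This style of test suits server code far better than doctests, since nothing here is meant to be typed at an interactive prompt.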
That said, David does have a very good point, that via some weird
sequence of events the code in sage/server could get broken. Since
that code is supposed to serve exactly one purpose now, the one and
only test that we *should* have of that code is that it can be used to
unpickle an old Sage notebook directory. I've opened a ticket for this.
Alex G and I extended coverage on databases/cremona.py but did not
achieve 100%. The reason: there are functions here which are *only*
needed to rebuild the elliptic curve database from a tarball which I
can produce from my data files. At the time these functions were
written, I was extending (or amending) the database rather regularly,
and each time I sent William the updated tarball and he used these
functions to update the Sage database. (Or something very like that
happened.) But that has not happened for a while. If and when I
extend it further, then certainly it will be necessary to have those
functions updated and tested (and I will of course be willing to do
exactly that.) But it is absolutely not necessary for them to be
tested at every release of Sage.
I have yet to see any Python code that behaves differently on
different platforms except for code using platform dependent features,
numerical noise (and even then only on Sparc), or external resources
(including timing). It's possible, but there are much more fruitful
things to test.
>> Is there *any* other motivation for removing this from the testing,
>> apart from increasing the percentage of doctest coverage?
> That I feel is the most important question.
>> If not, it boils down to sacrificing quality for better looking statistics.
> Which is what I think we are doing.
It's not just about better looking statistics, it's also about more
accurately reflecting the state of the codebase.
To clarify, in this case the reason that code needs to be there is
just to unpickle old notebooks, meaning that there needs to be a class
of a specific name in a specific module (again looked up by name) or
the old pickle won't know how to reconstruct itself. Most (all?) of
the actual code there could probably be deleted without affecting
unpickleability (todo once we have good tests of that).
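That mechanism can be demonstrated in a few lines of plain Python (the module and class names here are fabricated): a pickle records only the module path, the class name and the instance dict, so gutting the class down to an empty stub of the right name still lets the old data load.

```python
import pickle
import sys
import types

# Fabricate an "old" module containing a full-featured class,
# and pickle an instance of it.
old_mod = types.ModuleType("old_notebook")

class Worksheet(object):
    def render(self):  # heavy method; never stored in the pickle
        return "<html>...</html>"

Worksheet.__module__ = "old_notebook"
Worksheet.__qualname__ = "Worksheet"
old_mod.Worksheet = Worksheet
sys.modules["old_notebook"] = old_mod

w = Worksheet()
w.name = "w0"
blob = pickle.dumps(w, protocol=2)

# Now replace the class with an empty stub of the same name, in the
# same (fabricated) module location.
class Stub(object):
    pass

Stub.__module__ = "old_notebook"
Stub.__name__ = "Worksheet"
Stub.__qualname__ = "Worksheet"
old_mod.Worksheet = Stub

# Unpickling still works: the pickle only names "old_notebook.Worksheet"
# and carries the instance dict; none of the deleted code is needed.
obj = pickle.loads(blob)
print(obj.name)                # → w0
print(hasattr(obj, "render"))  # → False
```

Which is exactly why a good unpickling test is the one thing that must be kept: it pins down the names and module paths, while the method bodies are free to go.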