Re: [sage-devel] pushing towards 90% doctest coverage for Sage 5.0

Dr. David Kirkby

unread,
Jun 10, 2010, 5:21:51 PM6/10/10
to sage-...@googlegroups.com
On 06/10/10 09:25 PM, Minh Nguyen wrote:
> Hi folks,
>
> One of the main goals of the upcoming Sage 5.0 release is to get
> doctest coverage of the Sage library up to at least 90%. As of Sage
> 4.4.4.alpha0, the overall weighted coverage is 82.7%.

Seems we are a long way off.

It seems to me that, rather than picking a number like 90%, the areas should be
targeted carefully.

> To get a sense
> of which modules in the Sage library need work on their coverage
> scores, you could use the coverage script as follows:
>
> $ ./sage -coverage /path/to/module.py[x]
>
> Or you could do the following to get the coverage scores of all
> modules, including a coverage summary:
>
> $ ./sage -coverageall
>
> You might be interested in knowing which modules have a certain
> coverage percentage, in which case you could save the output of
> -coverageall to a text file and then grep that file for certain
> coverage scores. At this repository [1] is a script to generate
> various types of coverage analysis reports. You can also find the
> script at [3]. The script currently supports the following reports
>
> * The coverage summary of all modules.
>
> * Modules with 100% coverage.
>
> * Modules with zero coverage.
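The grep-the-output workflow described above can be sketched with a small filter script. This is a sketch only: the sample lines follow the `module: NN% (X of Y)` format quoted later in this thread, and the `modules_below` helper is hypothetical, not part of Sage.

```python
import re

# Assumed line format, e.g. "interfaces/tachyon.py: 0% (0 of 4)";
# a leading "#" as in some quoted lines is tolerated.
LINE = re.compile(r"#?\s*(?P<mod>\S+\.pyx?):\s*(?P<pct>\d+)%\s*\((\d+) of (\d+)\)")

def modules_below(lines, threshold):
    """Return (module, percent) pairs whose coverage is below threshold."""
    hits = []
    for line in lines:
        m = LINE.match(line.strip())
        if m and int(m.group("pct")) < threshold:
            hits.append((m.group("mod"), int(m.group("pct"))))
    return hits

sample = [
    "# interfaces/tachyon.py: 0% (0 of 4)",
    "# graphs/generic_graph.py: 99% (200 of 201)",
]
print(modules_below(sample, 90))  # [('interfaces/tachyon.py', 0)]
```

In practice one would feed it the saved output of `./sage -coverageall` instead of the hard-coded sample.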

I don't really understand these doctests well, but to me at least,
concentrating on modules which have zero coverage would seem best, even if only
one doctest per module is added. My logic is that something could be totally
broken and we would never know about it. If there is even one doctest, at least
it shows the module is not totally broken. (Though one could argue that being
totally broken is better than being partially broken; at least one finds out in
use.)

It was recently discovered that certain modules (matrix, class, mgcv, nnet,
rpart, spatial, and survival) in R are not building on Solaris.

http://trac.sagemath.org/sage_trac/ticket/9201

Had there been even one doctest for each module, this would have been obvious.
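The kind of one-doctest smoke test being argued for here can be sketched with plain Python doctests. Sage docstrings use `sage:` prompts rather than `>>>`; the function below and its name are hypothetical, chosen only to illustrate that a single example already proves the code can be imported and called.

```python
import doctest

def r_is_available():
    """
    Hypothetical smoke test: even one doctest proves the function can be
    imported and called, so the module is not totally broken.

    >>> r_is_available()
    True
    """
    return True

# Collect and run the doctests attached to the function itself.
finder = doctest.DocTestFinder()
test = finder.find(r_is_available, globs={"r_is_available": r_is_available})[0]
result = doctest.DocTestRunner().run(test)
print(result.failed, result.attempted)  # 0 1
```

Had the R modules above been wrapped with even this much, the Solaris build failure would have shown up as a doctest failure.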

There is some interesting information about how the SQLite database (included
in Sage) is tested. The number of lines of test code is 679 times that of the
actual source code of the project: the test code is 45.7 million lines, while
the source code of the database is about 67,000 lines. (Both exclude comments
and blank lines.)

http://www.sqlite.org/testing.html

I think their test procedures are a bit over the top, but it certainly puts
into perspective how seriously some developers take testing.

If what is written at

http://reference.wolfram.com/mathematica/tutorial/TestingAndVerification.html

about testing Mathematica is true, then the following statement is interesting:

"There is also a special instrumented version of Mathematica which is set up to
perform internal consistency tests. This version of Mathematica runs at a small
fraction of the speed of the real Mathematica, but at every step it checks
internal memory consistency, interruptibility, and so on"

I must admit, reading that Wolfram Research page, the statement that "The
standards of correctness for Mathematica are certainly much higher than for
typical mathematical proofs" is extremely stupid, since they don't define
"typical" and they provide no evidence for it. (It was not me who spotted that,
but it is an extremely dumb thing to write.)

Of course we can't verify the claims made by Wolfram Research, but we can
verify what the SQLite developers say: that the number of lines of test code is
679 times the amount of actual code in the database.

Even I would refuse to write 679 lines of test code for every line of code I
wrote! But really, the specification, implementation and testing should be done
by different people. In practice, that is not going to happen in Sage, though I
would not be surprised if it happens with Mathematica, since it is a pretty
standard technique in software engineering.

Dave

William Stein

unread,
Jun 10, 2010, 5:27:01 PM6/10/10
to sage-...@googlegroups.com
On Thu, Jun 10, 2010 at 2:21 PM, Dr. David Kirkby
<david....@onetel.net> wrote:
> On 06/10/10 09:25 PM, Minh Nguyen wrote:
>>
>> Hi folks,
>>
>> One of the main goals of the upcoming Sage 5.0 release is to get
>> doctest coverage of the Sage library up to at least 90%. As of Sage
>> 4.4.4.alpha0, the overall weighted coverage is 82.7%.
>
> Seems we are a long way off.
>
> It seems to me, rather than pick a number like 90%, the areas should be
> targeted carefully.

The goal for Sage-5.0 is 90% coverage. And we should aim for 100%
by the end of the year.

-- William

Dr. David Kirkby

unread,
Jun 10, 2010, 6:32:46 PM6/10/10
to sage-...@googlegroups.com

Consider two areas:

# interfaces/tachyon.py: 0% (0 of 4)
# graphs/generic_graph.py: 99% (200 of 201)

Where would it be most useful to add one doc test?

At least from my very limited understanding of this, having 89% coverage would
be better than 90% coverage, if that 89% were well targeted.

Dave


Simon King

unread,
Jun 10, 2010, 7:38:25 PM6/10/10
to sage-devel
Hi David,

On 11 Jun., 00:32, "Dr. David Kirkby" <david.kir...@onetel.net> wrote:
> At least from my very little understanding of this, Having 89% coverage would be
> better than 90% coverage, if those 89% were well targeted.

It is not clear to me why one module should be considered more important
than another. I would not support targeting the effort by the *topic* of
the modules being doctested. Arguing like "many people do calculus and
graphics, so concentrate on this" will only lead to a meta-discussion.

Or are you just saying that adding a single doctest to a module with 0%
coverage will have a better impact than adding a single test to a
module with 70% coverage? This might indeed be a good way to find a
starting point.

Anyway, I am +1 on trying to get 90% overall doctest coverage;
it is a valuable aim.
IMO it is *always* worth writing doctests, since doing so is very likely
to uncover flaws (in particular if the person who writes the test is
not the same as the one who wrote the code).

My 0.02€
Simon

Florent Hivert

unread,
Jun 10, 2010, 8:07:27 PM6/10/10
to sage-...@googlegroups.com
Hi Minh,

> And you're done. Here [2] is a report generated by the script. The
> idea is to provide an overview of which modules need work. I'd be
> interested to know what other types of doctest coverage reports people
> would like to see. Comments, suggestions, critiques, etc. are welcome.

This report definitely looks like a good idea! However, I tried to pick
some random files from the lists:

sage/monoids/monoid.py
sage/structure/element_py.py
sage/structure/element_verify.py
sage/misc/typecheck.py

They all look like they should be deprecated and removed... If that's true, I
would rather improve the doctest coverage by removing them than by adding
doctests... However, I'd like confirmation that they are indeed obsolete...

This remark also holds for the following typecheck function in
sage/misc/misc.py:

#################################################################
# Type checking
#################################################################
def typecheck(x, C, var="x"):
    """
    Check that x is of instance C. If not raise a TypeError with an
    error message.
    """
    if not isinstance(x, C):
        raise TypeError, "%s (=%s) must be of type %s."%(var,x,C)


Cheers,

Florent

Dr. David Kirkby

unread,
Jun 10, 2010, 8:20:52 PM6/10/10
to sage-...@googlegroups.com
On 06/11/10 12:38 AM, Simon King wrote:
> Hi David,
>
> On 11 Jun., 00:32, "Dr. David Kirkby"<david.kir...@onetel.net> wrote:
>> At least from my very little understanding of this, Having 89% coverage would be
>> better than 90% coverage, if those 89% were well targeted.
>
> It is not clear to me why one module should be considered being more
> important than another. I would not support if you suggest targeting
> the effort by the *topic* of the modules being doc tested. Arguing
> like "many people do calculus and graphics, so, concentrate on this"
> will only lead to a meta-discussion.

I would not say there is no logic to that, but it was not what I was suggesting.

> Or do you just say that adding a single doc test to a module with 0%
> coverage will have a better impact than adding a single test for a
> module with 70% coverage? This might indeed be a good way to find a
> starting point.

That was exactly what I meant. In the case of the interface to Tachyon, it
would appear there is absolutely no testing of it whatsoever:

# interfaces/tachyon.py: 0% (0 of 4)

so adding just one test would be useful. It would at least show the interface is
not completely broken.

In contrast, getting graphs/generic_graph.py up to 100%, by adding one test when
there are already 200, would seem less useful to me.

# graphs/generic_graph.py: 99% (200 of 201)

So I would suggest that doctests are targeted, rather than randomly chosen. (The
obvious way to get the coverage up quickly is to write simple tests.)

> Anyway, I am +1 to trying and getting a 90% overall doc test coverage;
> it is a valuable aim.

Yes, but IMHO, it is a bit of a simplistic metric.

I personally do not feel Sage is sufficiently tested, and I doubt my opinion
will change if the doctest coverage is increased to 100%.

> IMO it is *always* worth it to write doc tests since it is very likely
> to uncover flaws (in particular if the person who writes the test is
> not the same as the one who wrote the code).

Really, that should be the case.

http://www.sqlite.org/testing.html

is worth a read, particularly section 7.1 about the various ways of defining
test coverage. (I stuck section 7.1 below my name.) What is clear is that the
method of calculating coverage in Sage is even weaker than what that SQLite
document considers to be a weak method (statement coverage). We don't even
ensure that every statement of code gets executed at least once.

Dave

Section 7.1 from http://www.sqlite.org/testing.html

=============================================================
There are many ways to measure test coverage. The most popular metric is
"statement coverage". When you hear someone say that their program has "XX% test
coverage" without further explanation, they usually mean statement coverage.
Statement coverage measures what percentage of lines of code are executed at
least once by the test suite.

Branch coverage is more rigorous than statement coverage. Branch coverage
measures the number of machine-code branch instructions that are evaluated at
least once in both directions.

To illustrate the difference between statement coverage and branch coverage,
consider the following hypothetical line of C code:

if( a>b && c!=25 ){ d++; }

Such a line of C code might generate a dozen separate machine code instructions.
If any one of those instructions is ever evaluated, then we say that the
statement has been tested. So, for example, it might be the case that the
conditional expression is always false and the "d" variable is never
incremented. Even so, statement coverage counts this line of code as having been
tested.

Branch coverage is more strict. With branch coverage, each test and each
subblock within the statement is considered separately. In order to achieve 100%
branch coverage in the example above, there must be at least three test cases:

* a<=b
* a>b && c==25
* a>b && c!=25

Any one of the above test cases would provide 100% statement coverage but all
three are required for 100% branch coverage. Generally speaking, 100% branch
coverage implies 100% statement coverage, but the converse is not true. To
reemphasize, the TH3 test harness for SQLite provides the stronger form of test
coverage - 100% branch test coverage.
====================================================
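For readers more at home in Python than C, the quoted example can be restated as follows. The function and the three calls are purely illustrative, mirroring the three test cases the SQLite text lists.

```python
def step(a, b, c, d):
    """Mirror the C fragment ``if( a>b && c!=25 ){ d++; }``."""
    if a > b and c != 25:
        d += 1
    return d

# The three calls correspond to the three test cases listed above; together
# they evaluate both sub-conditions in both directions (branch coverage),
# whereas any single call already touches the line (statement coverage).
assert step(1, 2, 0, 0) == 0    # a <= b: first test false
assert step(2, 1, 25, 0) == 0   # a > b but c == 25: second test false
assert step(2, 1, 0, 0) == 1    # a > b and c != 25: d is incremented
```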

> My 0.02€
> Simon


Dave

kcrisman

unread,
Jun 10, 2010, 9:19:44 PM6/10/10
to sage-devel

> That was exactly what I meant. In the case of the interface to tachyon, it would
> appear there is absolutely no testing of this whatsoever
>
> # interfaces/tachyon.py: 0% (0 of 4)

Though there is some testing of it in the plot files. The point is
good, just not for that example.

> In contrast, getting graphs/generic_graph.py up to 100%, by adding one test when
> there are already 200, would seem less useful to me.
>
> # graphs/generic_graph.py: 99% (200 of 201)
>
> So I would suggest that doctests are targeted, rather than randomly chosen. (The
> obvious way to get the coverage up quickly is to write simple tests.)

But we do want doctests to test a fairly large number of the options,
so just adding doctest coverage isn't enough. The doctests need to
really test, as you say. Getting more of the framework tests in
(which I don't quite understand) will help, as would real
randomization (which has been often discussed but is perhaps hard to
implement?).

Florent's point is also very good. There are a number of files which
are no longer needed (e.g., axes.py) which are nearly undoctested.

- kcrisman

Robert Miller

unread,
Jun 10, 2010, 10:34:57 PM6/10/10
to sage-...@googlegroups.com
Minh,

Can you make a report which lists the files which, if brought up to
100% coverage, would benefit overall coverage the most?

On Thu, Jun 10, 2010 at 1:25 PM, Minh Nguyen <nguye...@gmail.com> wrote:
> Hi folks,
>
> One of the main goals of the upcoming Sage 5.0 release is to get
> doctest coverage of the Sage library up to at least 90%. As of Sage

> 4.4.4.alpha0, the overall weighted coverage is 82.7%. To get a sense


> of which modules in the Sage library need work on their coverage
> scores, you could use the coverage script as follows:
>
> $ ./sage -coverage /path/to/module.py[x]
>
> Or you could do the following to get the coverage scores of all
> modules, including a coverage summary:
>
> $ ./sage -coverageall
>
> You might be interested in knowing which modules have a certain
> coverage percentage, in which case you could save the output of
> -coverageall to a text file and then grep that file for certain
> coverage scores. At this repository [1] is a script to generate
> various types of coverage analysis reports. You can also find the
> script at [3]. The script currently supports the following reports
>
> * The coverage summary of all modules.
>
> * Modules with 100% coverage.
>
> * Modules with zero coverage.
>

> * Modules with between 1% and 9% coverage.
>
> * Modules with between 10% and 19% coverage.
>
> * Modules with between 20% and 29% coverage.
>
> * Modules with between 30% and 39% coverage.
>
> * Modules with between 40% and 49% coverage.
>
> * Modules with between 50% and 59% coverage.
>
> * Modules with between 60% and 69% coverage.
>
> * Modules with between 70% and 79% coverage.
>
> * Modules with between 80% and 89% coverage.
>
> * Modules with between 90% and 99% coverage.
>
> Each report has links to detailed reports for individual modules. To
> run the script, copy it to the SAGE_ROOT of a Sage source or binary
> installation and do
>
> [mvngu@sage sage-4.4.4.alpha0]$ ./coverage-status.py
> Coverage report of all modules...
> Summary of doctest coverage...
> Modules with 0% coverage...
> Modules with 100% coverage...
> Coverage reports within certain ranges...
> Detailed coverage report for all modules...
> Format the detailed coverage reports...
> Format the summary reports...
> Generate index.html...


>
> And you're done. Here [2] is a report generated by the script. The
> idea is to provide an overview of which modules need work. I'd be
> interested to know what other types of doctest coverage reports people
> would like to see. Comments, suggestions, critiques, etc. are welcome.
>
>

> [1] http://bitbucket.org/mvngu/coverage
>
> [2] http://sage.math.washington.edu/home/mvngu/doctest-coverage/
>
> [3] http://sage.math.washington.edu/home/mvngu/apps/coverage/
>
> --
> Regards
> Minh Van Nguyen
>
> --
> To post to this group, send an email to sage-...@googlegroups.com
> To unsubscribe from this group, send an email to sage-devel+...@googlegroups.com
> For more options, visit this group at http://groups.google.com/group/sage-devel
> URL: http://www.sagemath.org
>

--
Robert L. Miller
http://www.rlmiller.org/

Jason Grout

unread,
Jun 10, 2010, 11:41:13 PM6/10/10
to sage-...@googlegroups.com
On 6/10/10 7:20 PM, Dr. David Kirkby wrote:

> We don't even ensure that every statement of code
> gets executed at least once.

Mike Hansen posted some code to use a tool to check that (a long time
ago). I imagine that after doctest coverage is up to 100% function
coverage, there will be a new push to get the statement coverage up to
100%. It would be interesting even now to see how much statement
coverage lags behind function coverage.

Jason

Andrey Novoseltsev

unread,
Jun 11, 2010, 12:47:29 AM6/11/10
to sage-devel
On Jun 10, 9:41 pm, Jason Grout <jason-s...@creativetrax.com> wrote:
> I imagine that after doctest coverage is up to 100% function
> coverage that there will be a new push to then get the statement
> coverage up to 100%.  It would be interesting even now to see how much
> statement coverage lagged behind function coverage.

Where should such tests go? I am not sure that it is desirable to show
50 sophisticated examples for a single function in the interactive or
compiled help. On the other hand, I really like it when all the tests are
right next to the body of the function. Is it possible to, say, have
"EXAMPLES" and "TESTS" sections in the docstring and avoid showing
"TESTS" by default? For the Sphinx documentation it would be nice to
have a way to either "expand" this section on demand or have a link to
the file with tests, in case someone really wants to see them...

Thank you,
Andrey

John H Palmieri

unread,
Jun 11, 2010, 1:34:36 AM6/11/10
to sage-devel
On Jun 10, 9:47 pm, Andrey Novoseltsev <novos...@gmail.com> wrote:
> On Jun 10, 9:41 pm, Jason Grout <jason-s...@creativetrax.com> wrote:
>
> > I imagine that after doctest coverage is up to 100% function
> > coverage that there will be a new push to then get the statement
> > coverage up to 100%.  It would be interesting even now to see how much
> > statement coverage lagged behind function coverage.
>
> Where should such tests go?

They can go in separate files, files which, for example, are not
included in the reference manual. The file sage/homology/tests.py is
an example. Each function should have doctests (so the goal is still
100% coverage), but it's not a big deal to relegate lots of technical
tests to less visible places.

--
John

Robert Bradshaw

unread,
Jun 11, 2010, 1:48:13 AM6/11/10
to sage-...@googlegroups.com

Personally, I would much rather put the relevant tests locally right
in the docstring and hide them from the documentation generators.
Especially as TESTS blocks often test corner cases or other
technicalities relevant to that specific function.
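The docstring layout being discussed looks roughly like this (a schematic sketch: the `sage:` prompts follow Sage's doctest convention, and `double` is a hypothetical function):

```python
def double(n):
    r"""
    Return twice ``n``.

    EXAMPLES::

        sage: double(3)
        6

    TESTS:

    Corner cases that the reference manual need not show::

        sage: double(0)
        0
        sage: double(-4)
        -8
    """
    return 2 * n
```

The idea in this subthread is that the documentation builder would render the EXAMPLES block but suppress the TESTS block, while the doctest framework would still run both.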

- Robert

Robert Bradshaw

unread,
Jun 11, 2010, 1:54:53 AM6/11/10
to sage-...@googlegroups.com
On Jun 10, 2010, at 2:21 PM, Dr. David Kirkby wrote:

> On 06/10/10 09:25 PM, Minh Nguyen wrote:
>> Hi folks,
>>
>> One of the main goals of the upcoming Sage 5.0 release is to get
>> doctest coverage of the Sage library up to at least 90%. As of Sage
>> 4.4.4.alpha0, the overall weighted coverage is 82.7%.
>
> Seems we are a long way off.
>
> It seems to me, rather than pick a number like 90%, the areas should
> be targeted carefully.

90% is a very nice global goal to have--it's something concrete to
shoot for while still letting everyone work on what they like or think is
most important. That being said, I agree with you that having people
write tests for more valuable areas rather than easy ones is certainly
worthwhile.

>> To get a sense
>> of which modules in the Sage library need work on their coverage
>> scores, you could use the coverage script as follows:
>>
>> $ ./sage -coverage /path/to/module.py[x]
>>
>> Or you could do the following to get the coverage scores of all
>> modules, including a coverage summary:
>>
>> $ ./sage -coverageall
>>
>> You might be interested in knowing which modules have a certain
>> coverage percentage, in which case you could save the output of
>> -coverageall to a text file and then grep that file for certain
>> coverage scores. At this repository [1] is a script to generate
>> various types of coverage analysis reports. You can also find the
>> script at [3]. The script currently supports the following reports
>>
>> * The coverage summary of all modules.
>>
>> * Modules with 100% coverage.
>>
>> * Modules with zero coverage.
>
> I don't really understand these docs tests well, but to me at least,
> concentrating on modules which have zero coverage would seem best,
> even if only one doctest/module is added. My logic being something
> could be totally broken, and we would never know about it. At least
> if there is even one doctest, it shows it is not totally broken.
> (Though one could argue being totally broken is better than being
> partially broken. At least one finds out in use.)

+1, though of course some files (e.g. tachyon) are indirectly tested,
so it's hard to tell just looking at the numbers alone.

- Robert

Robert Bradshaw

unread,
Jun 11, 2010, 1:56:20 AM6/11/10
to sage-...@googlegroups.com

That would be very interesting indeed. I don't think it'll be too long
before we have line profilers fully supported (and hence coverage
tools too) for Cython as well as Python code. I'm not sure what tools
there are to deal with branch coverage.

- Robert

Florent Hivert

unread,
Jun 11, 2010, 4:46:48 AM6/11/10
to sage-...@googlegroups.com
>> They can go in separate files, files which, for example, are not
>> included in the reference manual. The file sage/homology/tests.py is
>> an example. Each function should have doctests (so the goal is still
>> 100% coverage), but it's not a big deal to relegate lots of technical
>> test to less visible places.
>
> Personally, I would much rather put the relevant tests locally right in the
> docstring and hide them from the documentation generators. Especially as
> TESTS blocks often test corner cases or other technicalities relevant to
> that specific function.

+1. Sphinx can certainly handle hiding TESTS blocks if asked to.

Florent

Florent Hivert

unread,
Jun 11, 2010, 5:02:28 AM6/11/10
to sage-...@googlegroups.com
Hi There,

>>> We don't even ensure that every statement of code
>>> gets executed at least once.
>>
>> Mike Hansen posted some code to use a tool to check that (a long time
>> ago). I imagine that after doctest coverage is up to 100% function
>> coverage that there will be a new push to then get the statement coverage
>> up to 100%. It would be interesting even now to see how much statement
>> coverage lagged behind function coverage.
>
> That would be very interesting indeed. I don't think it'll be too long
> before we have line profilers fully supported (and hence coverage tools
> too) for Cython as well as Python code. I'm not sure what tools there are
> to deal with branch coverage.

+10!!! See my message in the same thread. There are several files in Sage
which are never executed. Having an automatic way to check that could be a
good way to find them, deprecate them, and remove them. As I said, I'd rather
remove unused code than add doctests to it.

Cheers,

Florent

Jason Grout

unread,
Jun 11, 2010, 5:30:38 AM6/11/10
to sage-...@googlegroups.com
On 6/11/10 4:19 AM, Minh Nguyen wrote:
> Hi Florent,
>
> On Fri, Jun 11, 2010 at 10:07 AM, Florent Hivert
> <florent...@univ-rouen.fr> wrote:
>
> <SNIP>

>
>> They all look like they should be deprecated and removed... If that's true, I
>> would rather improve the doctest coverage by removing them than by adding
>> doctests... However, I'd like confirmation that they are indeed obsolete...
>
> We are aiming for a Sage 5.0 release. The major version number says
> it: don't expect backwards compatibility. So whatever functions,
> methods, classes, modules that are deprecated should be removed in
> Sage 5.0. Now it's time to take stock of deprecated stuff that are
> over 60 months old

60 months old?? How about 6-12 months?

Jason

Florent Hivert

unread,
Jun 11, 2010, 5:33:50 AM6/11/10
to Sage Devel
Hi Minh,

> > They all look like they should be deprecated and removed... If that's true, I
> > would rather improve the doctest coverage by removing them than by adding
> > doctests... However, I'd like confirmation that they are indeed obsolete...
>

> We are aiming for a Sage 5.0 release. The major version number says
> it: don't expect backwards compatibility. So whatever functions,
> methods, classes, modules that are deprecated should be removed in
> Sage 5.0. Now it's time to take stock of deprecated stuff that are

> over 60 months old and plan to remove them. Let's be brutal wherever
> we can :-)

I like this way of seeing things. However, I was speaking about modules or
functions which are neither tested nor deprecated and are used nowhere in Sage
(easy to check using grep). Does it make sense to remove them without a
deprecation warning? Much code seems to have been put there just in case it
would be useful, and was never used by the Sage library itself, but maybe by
some users...

Do we agree on the policy:

- If a user needs some code, he should take care to document and test it.
- Corollary: any code which is neither tested nor used can be safely removed
  without a deprecation warning.

Cheers,

Florent

Florent Hivert

unread,
Jun 11, 2010, 7:28:58 AM6/11/10
to sage-...@googlegroups.com
Hi,

> > Where should such tests go? I am not sure that it is desirable to show
> > 50 sophisticated examples for a single function in the interactive or
> > compiled help. On the other hand, I really like when all the tests are
> > right next to the body of the function. Is it possible to, say, have
> > "EXAMPLES" and "TESTS" sections in the docstring and avoid showing
> > "TESTS" by default? For the Sphynx documentation it would be nice to
> > have a way to either "expand" this section on demand or have a link to
> > the file with tests, in case someone really wants to see them...
>

> Here are some possibilities:
>
> * Define test functions with the name format
> "_test_<rest-of-function-name>" and put test code in there. This way,
> the test code and function won't show up in the reference manual.

Be careful! Right now, functions named _test_* are those which are
automatically launched by TestSuite and therefore must have a very specific
prototype. So you probably want to pick a different name.

Cheers,

Florent

Florent Hivert

unread,
Jun 11, 2010, 7:32:25 AM6/11/10
to sage-...@googlegroups.com
Hi,

> > I like this way of seeing things. However, I was speaking about modules or
> > functions which are neither tested nor deprecated and are used nowhere in
> > Sage (easy to check using grep). Does it make sense to remove them without
> > a deprecation warning? Much code seems to have been put there just in case
> > it would be useful, and was never used by the Sage library itself, but
> > maybe by some users...
> >
> > Do we agree on the policy:
> >
> > - If a user needs some code, he should take care to document and test it.
> > - Corollary: any code which is neither tested nor used can be safely removed
> >   without a deprecation warning.
>

> There are functions, classes, methods that were introduced into the
> Sage library well before the policy of 100% doctest coverage was
> implemented and so completely lack testing. It can be difficult to
> know if a piece of orphaned code should be removed. I think we need to
> consider your proposed policy on a module by module basis.

OK! That was the purpose of my first e-mail. Let's do it:

sage/monoids/monoid.py
sage/structure/element_py.py
sage/structure/element_verify.py
sage/misc/typecheck.py

They all look like they should be deprecated and removed... If that's true, I
would rather improve the doctest coverage by removing them than by adding
doctests... However, I'd like confirmation that they are indeed obsolete...

This remark also holds for the following typecheck function in
sage/misc/misc.py:

#################################################################
# Type checking
#################################################################
def typecheck(x, C, var="x"):
    """
    Check that x is of instance C. If not raise a TypeError with an
    error message.
    """
    if not isinstance(x, C):
        raise TypeError, "%s (=%s) must be of type %s."%(var,x,C)

Florent

Robert Miller

unread,
Jun 11, 2010, 7:47:16 AM6/11/10
to sage-...@googlegroups.com
Minh,

On Fri, Jun 11, 2010 at 2:49 AM, Minh Nguyen <nguye...@gmail.com> wrote:
> Here is my understanding of what you want. Let's say the Sage
> community has enough time to write tests for 20 modules. Which 20
> modules could we choose to write tests for such that it results in the
> greatest overall weighted coverage for the Sage library. In that case,
> I think such a report can be implemented. Please tell me if I have
> misunderstood your feature request.

Yes, exactly. Or 5 modules, or 100. I want to go down the list and
start writing doctests for the first module I see there which I feel
relatively comfortable working on.

Andrey Novoseltsev

unread,
Jun 11, 2010, 10:57:40 AM6/11/10
to sage-devel
On Jun 11, 2:46 am, Florent Hivert <florent.hiv...@univ-rouen.fr>
wrote:
That would be so awesome! In principle, I probably should implement
it, since I really want it, but in practice it will be time consuming
given my (lack of) knowledge of Sphinx, and I already have quite a few
other things going. But if someone implements it, I will be very, very
happy!

Andrey

Robert Bradshaw

unread,
Jun 11, 2010, 11:15:32 AM6/11/10
to sage-...@googlegroups.com

That was the original goal of making the TESTS block distinct from
EXAMPLES. I don't think it's a matter of if, but of when.

- Robert


Florent Hivert

unread,
Jun 11, 2010, 11:21:09 AM6/11/10
to sage-...@googlegroups.com
Hi Minh,

Thanks for carefully investigating those:

> > sage/monoids/monoid.py
>
> I think this module should stay put. Here is a dependency chart based
> on that module:
>
> monoids.monoid.Monoid_class --> monoids.free_monoid.FreeMonoid_class
> --> monoids.string_monoid.StringMonoid_class
>
> where the relation
>
> a --> b
>
> means that a is used by b. From
> monoids.string_monoid.StringMonoid_class, various alphabets are
> defined. These alphabets are extensively used in various cryptography
> modules, e.g.
>
> * crypto/classical_cipher.py
> * crypto/classical.py
> * crypto/cryptosystem.py
> * crypto/stream_cipher.py
> * crypto/util.py
> * crypto/public_key/blum_goldwasser.py
> * crypto/block_cipher/miniaes.py
> * crypto/block_cipher/sdes.py

Right now you are right! However, if you read the doc, it says that the module
should be removed after making proper use of categories... So it should be
deprecated in the long run...

> > sage/structure/element_py.py
>
> I have looked through that file and have doctested the Sage
> 4.4.4.alpha0 library with that file removed. So far no failures, which
> doesn't mean that structure/element_py.py isn't used anywhere else.

One more indication: there isn't any reference to element_py anywhere in Sage:

tomahawk-*el/sage-combinat $ grep element_py **/*.py*
Binary file build/sage/structure/element_py.pyc matches


> Indeed, a complaint with strong language. But you get the general idea
> about having good documentation. For reference, I have collected some
> resources [2] on how to write good documentation.

Sure!!! You don't need to convince me.

> > sage/structure/element_verify.py
>
> I get the feeling that this module is about code analysis for Cython
> modules. I think it might come in handy later on when someone needs to
> write a code analysis framework. They might not use all of
> structure/element_verify.py, but I suspect that that module could
> provide a starting point on how to proceed.

It seems to me completely redundant with the TestSuite/category framework. Once
again, let's keep it until things get adapted to that framework.

> > sage/misc/typecheck.py
>
> As with structure/element_py.py, the module misc/typecheck.py just
> makes me feel stupid when it comes time for me to try to use it. A
> tool that makes one feel stupid isn't making a good impression on that
> person, and would drive the person away from using it. To me, the
> module is very under-documented, doesn't follow PEP 8 coding
> conventions, and is very badly formatted. I let others in the Sage
> community form their own judgements about the removal of this module.


>
>
> > This remark also hold for the following typecheck function in
> > sage/misc/misc.py
> > #################################################################
> > # Type checking
> > #################################################################
> > def typecheck(x, C, var="x"):
> > """
> > Check that x is of instance C. If not raise a TypeError with an
> > error message.
> > """
> > if not isinstance(x, C):
> > raise TypeError, "%s (=%s) must be of type %s."%(var,x,C)
>

> This looks to me like, how can I put it, redundant code. I'm basing my
> judgement on the idea that I would write such code at the point where
> I want to do sanity checking, instead of calling a function that does
> the same thing, but doesn't allow me to customize my exception
> message. I would personally vote for removal of this function.
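Minh's point, sketched in code: an inline isinstance check lets each call site tailor its own error message, which the generic typecheck() helper cannot. The function name and the modulus example are made up:

```python
def set_modulus(m):
    # Inline sanity check with a message tailored to this call site,
    # instead of calling a generic typecheck(m, int, "m") helper.
    if not isinstance(m, int):
        raise TypeError("modulus m (=%r) must be an integer, not %s"
                        % (m, type(m).__name__))
    return m % 7

print(set_modulus(10))  # -> 3
```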

Let's wait for more votes...

Cheers,

Florent

Robert Bradshaw

unread,
Jun 11, 2010, 11:23:39 AM6/11/10
to sage-...@googlegroups.com
On Jun 11, 2010, at 2:33 AM, Florent Hivert wrote:

> Hi Minh,
>
>>> They all look like they should be deprecated and removed... If it's
>>> true, I would rather improve the doctest coverage by removing them
>>> than by adding doctests... However, I'd like to have confirmation
>>> that they are indeed obsolete...
>>
>> We are aiming for a Sage 5.0 release. The major version number says
>> it: don't expect backwards compatibility. So whatever functions,
>> methods, classes, modules that are deprecated should be removed in
>> Sage 5.0. Now it's time to take stock of deprecated stuff that are
>> over 60 months old and plan to remove them. Let's be brutal wherever
>> we can :-)

This brings up an orthogonal issue: if we're going to try to get lots
of publicity for 5.0, wouldn't it be better to focus on stability than
to use the chance to break backwards compatibility?

> I like this way of seeing it. However, I was speaking about modules or
> functions which are neither tested nor deprecated and are nowhere used
> in Sage (easy to check using grep). Does it make sense to remove them
> without a deprecation warning? Much code seems to have been put here,
> just in case it is useful, and was never used by the Sage library
> itself, but maybe by some users...
>
> Do we agree on the policy:
>
> - If a user needs a piece of code, he should take care to document and
> test it.
> - Corollary: any code which is neither tested nor used can be safely
> removed without a deprecation warning.

The only code without doctests is (or should be) old code that went in
before the 100% doctest policy, so I don't think we can just go and
delete things like this (especially really old code, which often, but
not always, has lots of stuff sitting on top of it).

> sage/monoids/monoid.py

I'm pretty sure this is actually used.

> sage/structure/element_py.py

Deprecate for sure.

> sage/structure/element_verify.py

Probably deprecate, putting any relevant functionality into the
coverage script.

> sage/misc/typecheck.py

This was added just last version? Perhaps ask the author Dmitry
Dvoinikov.

Note that just deleting a file and running tests isn't sufficient, as
one would have to purge the .py and .pyc (or .so) files from all the
build directories as well.
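A sketch of the purge step Robert means, in a scratch directory (the paths and the build layout are illustrative): deleting the .py file from the source tree is not enough, because stale compiled copies (.pyc, .so) left in the build directories may still be imported silently by the doctest run.

```shell
# Set up a stand-in build tree with stale compiled artifacts.
mkdir -p demo/build/sage/structure
touch demo/build/sage/structure/element_py.pyc \
      demo/build/sage/structure/element_py.so
# The actual purge: remove every compiled trace of the deleted module.
find demo/build \( -name 'element_py.py*' -o -name 'element_py.so' \) -delete
```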

- Robert

Florent Hivert

unread,
Jun 11, 2010, 11:29:47 AM6/11/10
to sage-...@googlegroups.com
Hi Andrey,

Looks like you are asking me to do it ;-) I probably can do that, but not
before finishing the current Sphinx patch #9128. By the way, if any Sphinx
expert (Mike ???) can help review this patch, I would really appreciate it.
My Sphinx expertise is improving, but I can't say that I'm exactly sure what
I am doing there.

Cheers,

Florent

John Cremona

unread,
Jun 12, 2010, 9:38:52 AM6/12/10
to sage-...@googlegroups.com
Is there still a wiki page for people to sign up to deal with one or
more of these? Or a standard for trac ticket titles to ensure that
effort is not duplicated?

I intend to deal with interfaces/mwrank.py (2/10) and
databases/cremona.py (17/40) (at least to start with!).

John

On 12 June 2010 05:26, Minh Nguyen <nguye...@gmail.com> wrote:
> Hi Robert,
>
> On Fri, Jun 11, 2010 at 9:47 PM, Robert Miller <r...@rlmiller.org> wrote:
>
> <SNIP>


>
>> Yes, exactly. Or 5 modules, or 100. I want to go down the list and
>> start writing doctests for the first module I see there which I feel
>> relatively comfortable working on.
>

> See the updated coverage report at
>
> http://sage.math.washington.edu/home/mvngu/doctest-coverage/
>
> It now has a section called "Strategic reports" that has various lists
> of modules. For example, the first list is a strategic list of 180
> modules; if all of them were to have full coverage, the 90% coverage
> goal would be met. At the moment, 180 modules is the lowest I could
> get with a quick-and-dirty approach via itertools.combinations(),
> taking the very first list that satisfies the 90% goal. The problem
> essentially boils down to the subset sum problem, but my approach so
> far has been quick-and-dirty. I wanted to present an overview of where
> in the Sage library one could devote attention in working towards the
> 90% goal of Sage 5.0. There are 3 processes running on mod.math at the
> moment, trying to decrease the list down to, say, 170, 160, and 150
> strategic modules. From the look of it, it could take hours or days
> before any of these three processes returns a strategic list shorter
> than 180 modules.
>
> --
> Regards
> Minh Van Nguyen
>
> --
> To post to this group, send an email to sage-...@googlegroups.com
> To unsubscribe from this group, send an email to sage-devel+...@googlegroups.com
> For more options, visit this group at http://groups.google.com/group/sage-devel
> URL: http://www.sagemath.org
>
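The selection problem Minh describes boils down to choosing a small set of modules whose missing doctests close the coverage gap. A greedy sketch that avoids the exhaustive subset-sum search (toy data, made-up module names):

```python
def pick_modules(modules, goal=0.90):
    """Pick modules which, if brought to full doctest coverage, would
    lift the overall ratio to at least ``goal``.  Greedy: grab the
    module with the most missing doctests first, so the list stays
    short without an exhaustive subset-sum search."""
    covered = sum(c for _, c, _ in modules)
    total = sum(t for _, _, t in modules)
    chosen = []
    for name, c, t in sorted(modules, key=lambda m: m[2] - m[1],
                             reverse=True):
        if covered / total >= goal:
            break
        chosen.append(name)
        covered += t - c  # pretend this module now has full coverage
    return chosen

# Toy data: (module, functions with doctests, total functions).
data = [("a.py", 1, 10), ("b.py", 8, 10), ("c.py", 0, 5), ("d.py", 20, 20)]
print(pick_modules(data))  # -> ['a.py', 'c.py']
```

Greedy is optimal here for minimizing the count, since any module's gain is independent of the others; the real weighted-coverage score used by Sage's coverage script may behave differently.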

Alex Ghitza

unread,
Jun 12, 2010, 9:43:58 AM6/12/10
to John Cremona, sage-...@googlegroups.com

Hi John,

I don't have an answer to your questions, but...

On Sat, 12 Jun 2010 14:38:52 +0100, John Cremona <john.c...@gmail.com> wrote:
> I intend to deal with interfaces/mwrank.py (2/10) and
> databases/cremona.py (17/40) (at least to start with!).

... have a look at #9223, I have just posted a patch that brings the
coverage of databases/cremona.py to 34/40.


Best,
Alex


--
Alex Ghitza -- http://aghitza.org/
Lecturer in Mathematics -- The University of Melbourne -- Australia

Simon King

unread,
Jun 15, 2010, 4:02:56 AM6/15/10
to sage-devel
Hi all!

On Jun 12, 2:38 pm, John Cremona <john.crem...@gmail.com> wrote:
> Is there still a wiki page for people to sign up to deal with one or
> more of these?  Or a standard for trac ticket titles to ensure that
> effort is not duplicated?

This would be good to have.

For the record:
I took care of sage.categories.homset, at #9235 (and before I did, I
did a search for "doc homset").

Cheers,
Simon

Alex Ghitza

unread,
Jun 15, 2010, 5:14:20 AM6/15/10
to Simon King, sage-devel
On Tue, 15 Jun 2010 01:02:56 -0700 (PDT), Simon King <simon...@nuigalway.ie> wrote:
> On Jun 12, 2:38 pm, John Cremona <john.crem...@gmail.com> wrote:
> > Is there still a wiki page for people to sign up to deal with one or
> > more of these?  Or a standard for trac ticket titles to ensure that
> > effort is not duplicated?
>
> This would be good to have.

See

http://wiki.sagemath.org/doc5

for a very basic page listing what my detective skills found on trac.
Since my mindreader skills are very limited, please add any modules that
you are working on. It's really a pain if you get scooped ;)

daveloeffler

unread,
Jun 15, 2010, 6:56:14 AM6/15/10
to sage-devel
(I'm working on a couple of tickets but I can't remember my Sage wiki
account password -- can someone with admin rights reset it for me?)

On Jun 15, 10:14 am, Alex Ghitza <aghi...@gmail.com> wrote:
> On Tue, 15 Jun 2010 01:02:56 -0700 (PDT), Simon King <simon.k...@nuigalway.ie> wrote:
> > On Jun 12, 2:38 pm, John Cremona <john.crem...@gmail.com> wrote:
> > > Is there still a wiki page for people to sign up to deal with one or
> > > more of these?  Or a standard for trac ticket titles to ensure that
> > > effort is not duplicated?
>
> > This would be good to have.
>
> See
>
> http://wiki.sagemath.org/doc5
>
> for a very basic page listing what my detective skills found on trac.
> Since my mindreader skills are very limited, please add any modules that
> you are working on.  It's really a pain if you get scooped ;)
>
> Best,
> Alex
>
> --
> Alex Ghitza --http://aghitza.org/

William Stein

unread,
Jun 19, 2010, 12:10:56 PM6/19/10
to sage-...@googlegroups.com
On Tue, Jun 15, 2010 at 3:56 AM, daveloeffler <dave.l...@gmail.com> wrote:
> (I'm working on a couple of tickets but I can't remember my Sage wiki
> account password -- can someone with admin rights reset it for me?)

As far as I know, nobody knows how to reset Sage wiki passwords.
Just make a new account with a slightly different username, and write
down your password somewhere.

-- William

>
> On Jun 15, 10:14 am, Alex Ghitza <aghi...@gmail.com> wrote:
>> On Tue, 15 Jun 2010 01:02:56 -0700 (PDT), Simon King <simon.k...@nuigalway.ie> wrote:
>> > On Jun 12, 2:38 pm, John Cremona <john.crem...@gmail.com> wrote:
>> > > Is there still a wiki page for people to sign up to deal with one or
>> > > more of these?  Or a standard for trac ticket titles to ensure that
>> > > effort is not duplicated?
>>
>> > This would be good to have.
>>
>> See
>>
>> http://wiki.sagemath.org/doc5
>>
>> for a very basic page listing what my detective skills found on trac.
>> Since my mindreader skills are very limited, please add any modules that
>> you are working on.  It's really a pain if you get scooped ;)
>>
>> Best,
>> Alex
>>
>> --
>> Alex Ghitza --http://aghitza.org/
>> Lecturer in Mathematics -- The University of Melbourne -- Australia
>


--
William Stein
Professor of Mathematics
University of Washington
http://wstein.org

Robert Bradshaw

unread,
Jun 21, 2010, 1:58:41 PM6/21/10
to sage-...@googlegroups.com

On Jun 11, 2010, at 2:42 AM, Minh Nguyen wrote:

> Hi Florent,
>
> On Fri, Jun 11, 2010 at 7:33 PM, Florent Hivert
> <florent...@univ-rouen.fr> wrote:
>
> <SNIP>
>


>> I like this way of seeing it. However, I was speaking about modules or
>> functions which are neither tested nor deprecated and are nowhere used
>> in Sage (easy to check using grep). Does it make sense to remove them
>> without a deprecation warning? Much code seems to have been put here,
>> just in case it is useful, and was never used by the Sage library
>> itself, but maybe by some users...
>>
>> Do we agree on the policy:
>>
>> - If a user needs a piece of code, he should take care to document and
>> test it.
>> - Corollary: any code which is neither tested nor used can be safely
>> removed without a deprecation warning.
>

> There are functions, classes, and methods that were introduced into the
> Sage library well before the policy of 100% doctest coverage was
> implemented and so completely lack testing. It can be difficult to
> know if a piece of orphaned code should be removed. I think we need to
> consider your proposed policy on a module-by-module basis.

+1. Much of that code has been around for years, and so is the least
safe to deprecate without warning. Of course, there's a lot of dead
code that could be pruned/cleaned, but let's put at least some
deprecation warnings in sooner rather than later.

Perhaps, if you're thinking about coverage, it would be fair not to
count deprecated code in that number.
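A plain-Python sketch of the kind of warning Robert asks for, using the typecheck() from misc.py quoted earlier in the thread as the guinea pig (in practice Sage's own deprecation helper would be used; warnings.warn merely illustrates the mechanism):

```python
import warnings

def typecheck(x, C, var="x"):
    # Keep the old behaviour working but warn, so downstream users get
    # a grace period before the function is removed outright.
    warnings.warn("typecheck() is deprecated; use an inline isinstance() "
                  "check instead", DeprecationWarning, stacklevel=2)
    if not isinstance(x, C):
        raise TypeError("%s (=%s) must be of type %s." % (var, x, C))
```

Callers keep working unchanged, but test runs and interactive sessions surface the warning, which is exactly the grace period a deprecation is meant to buy.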

- Robert

Reply all
Reply to author
Forward
0 new messages