
Sphinx Doctest: test the code without comparing the output.


Luca Cerone

Sep 21, 2013, 6:47:26 AM
Dear all,
I am writing the documentation for a Python package using Sphinx.

I have a problem when using doctest blocks in the documentation:
I haven't managed to get doctest to run a command while completely
ignoring its output.

For example, how can I get a doctest like the following to run correctly?

.. doctest:: example_1

   >>> import random
   >>> x = random.uniform(0,100)
   >>> print str(x)
   #some directive here to completely ignore the output

Since I don't know the value of `x`, ideally in this doctest I only want
to test that the various commands are correct, regardless of
the output produced.

I have tried using the ELLIPSIS directive, but the problem is that the `...`
is interpreted as a line continuation rather than as "any text":

.. doctest:: example_2

   >>> import random
   >>> x = random.uniform(0,100)
   >>> print str(x) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
   ...

I don't know if there is a way to make Sphinx understand that I want to ignore the whole output. I think the easiest way to solve this would be to differentiate between the ellipsis sequence and the line-continuation sequence, but I don't know how to do that.

I know that I could skip the execution of print(str(x)), but this is not what I want; I really would like the command to be executed and its output ignored.
Can you point me to any solution for this issue?

Thanks a lot in advance for your help,
Cheers,
Luca

Steven D'Aprano

Sep 21, 2013, 8:27:31 AM
On Sat, 21 Sep 2013 03:47:26 -0700, Luca Cerone wrote:

> Dear all,
> I am writing the documentation for a Python package using Sphinx.
>
> I have a problem when using doctest blocks in the documentation: I
> couldn't manage to get doctest to run a command but completely ignoring
> the output.
>
> For example, how can I get a doctest like the following to run
> correctly?
>
> .. doctest:: example_1
>
> >>> import random
> >>> x = random.uniform(0,100)
> >>> print str(x)
> #some directive here to completely ignore the output

The Fine Manual says that the directive you want is #doctest:+SKIP.

http://docs.python.org/2/library/doctest.html#doctest.SKIP

Although it's not explicitly listed as a directive, every option flag
corresponds to a directive. So your example becomes:

>>> import random
>>> x = random.uniform(0,100)
>>> print x #doctest:+SKIP
42.012345678901234


(There's no need to convert things to str before printing them.)



--
Steven

Luca Cerone

Sep 21, 2013, 8:44:09 AM
Dear Steven,
thanks for the help.

I am aware that I could have used the SKIP directive (as I hinted in my mail).
Even though the fine manual suggests doing so, I don't agree with it.
The reason is simple: SKIP, as the name suggests, causes the code not to be run at all; it doesn't ignore the output. If you use a SKIP directive on code that contains a typo, or if you changed
the name of a keyword to make it more meaningful and forgot to update your docstring, then the error won't be caught.

For example:

.. doctest:: example

   >>> printt "Hello, World!" # doctest: +SKIP
   "Hello, World!"

would pass the test. Since I am writing a tutorial for people who have even less experience with Python than I do, I want to be sure that the code in my examples runs just fine.
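This is easy to demonstrate with doctest's lower-level API (a minimal self-contained sketch, Python 3 syntax; the runner drops a +SKIP example before it is ever compiled, so even the typo goes unnoticed):

```python
import doctest

# A doctest containing a deliberate typo, guarded by +SKIP.
sample = '''
>>> printt "Hello, World!"  # doctest: +SKIP
"Hello, World!"
'''

parser = doctest.DocTestParser()
runner = doctest.DocTestRunner()
results = runner.run(parser.get_doctest(sample, {}, "sample", None, 0))

# SKIP discards the example before compiling it, so nothing fails --
# and nothing is even attempted.
print(results.failed, results.attempted)
```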

> (There's no need to convert things to str before printing them.)

You are right, I modified an example that uses x in one of my functions that requires a string in input, and didn't change that.

Thanks again for the help anyway,

Cheers,
Luca

Steven D'Aprano

Sep 21, 2013, 11:48:34 AM
On Sat, 21 Sep 2013 05:44:09 -0700, Luca Cerone wrote:

> If you use a SKIP
> directive on code that contains a typo, or maybe you changed the name of
> a keyword to make it more meaningful and forgot to update your
> docstring, then the error won't be caught.

And if you ignore the output, the error won't be caught either. What's
the difference?


>>> 1 + 1 #doctest:+IGNORE_OUTPUT (not a real directive)
1000

Since doctest is ignoring the output, it won't catch the failure. Even if
your code raises an exception, if you ignore the output, you'll never see
the exception because it is ignored.

So you simply can't do what you want. You can't both ignore the output of
a doctest and have doctest report if the test fails.


--
Steven

Luca Cerone

Sep 21, 2013, 12:25:26 PM
> And if you ignore the output, the error won't be caught either. What's
> the difference?
>
> >>> 1 + 1 #doctest:+IGNORE_OUTPUT (not a real directive)
> 1000

The difference is that in that case you want to check whether the result is correct or not, because you expect a certain result.

In my case, I don't know what the output is, nor do I care for the purposes of the tutorial. What I care about is being sure that the command in the tutorial is correct and up to date with the code.

If you try the following, the test will fail (because there is a typo in the code)

.. doctest:: example

   >>> printt "hello, world"

and not because the output doesn't match what you expected.

Even if the command is correct:

.. doctest:: example_2

   >>> print "hello, world"

this test will fail, because doctest expects an output. I want to be able to test that the syntax is correct and that the command can be run, while ignoring whatever the output is.

Don't focus on the specific print example; I use it just to show what the problem is, and it is not what I am writing a tutorial about!

> So you simply can't do what you want. You can't both ignore the output of
> a doctest and have doctest report if the test fails.

OK, maybe it is not possible using doctest. Is there any other way to do what I want? For example, using another Sphinx extension?

Chris Angelico

Sep 21, 2013, 9:33:24 PM
to pytho...@python.org
On Sun, Sep 22, 2013 at 2:25 AM, Luca Cerone <luca....@gmail.com> wrote:
> The difference is that in that case you want to check whether the result is correct or not, because you expect a certain result.
>
> In my case, I don't know what the output is, nor care for the purpose of the tutorial. What I care is being sure that the command in the tutorial is correct, and up to date with the code.

I'd call that a smoke-test, rather than something a test
harness/engine should be doing normally. All you do is see if the
program crashes. This can be extremely useful (I smoke-test my scripts
as part of my one-key "deploy to testbox" script, saving me the
trouble of actually running anything - simple syntactic errors or
misspelled function/variable names (in languages where that's a
concept) get caught really early); but if you're using this for a
tutorial, you risk creating a breed of novice programmers who believe
their first priority is to stop the program crashing. Smoke testing is
a tool that should be used by the expert, NOT a sole check given to a
novice.

ChrisA

Steven D'Aprano

Sep 21, 2013, 10:38:49 PM
On Sat, 21 Sep 2013 09:25:26 -0700, Luca Cerone wrote:

>> And if you ignore the output, the error won't be caught either. What's
>> the difference?
>>
>> >>> 1 + 1 #doctest:+IGNORE_OUTPUT (not a real directive)
>> 1000
>>
>>
> The difference is that in that case you want to check whether the result
> is correct or not, because you expect a certain result.
>
> In my case, I don't know what the output is, nor care for the purpose of
> the tutorial. What I care is being sure that the command in the tutorial
> is correct, and up to date with the code.
>
> If you try the following, the test will fail (because there is a typo in
> the code)
>
> .. doctest:: example
>
> >>> printt "hello, world"
>
> and not because the output doesn't match what you expected.


That is not how doctest works. That test fails because its output is:

SyntaxError: invalid syntax


not because doctest recognises the syntax error as a failure. Exceptions,
whether they are syntax errors or some other exception, can count as
doctest *successes* rather than failures. Both of these count as passing
doctests:


>>> x++ # Python is not C!
Traceback (most recent call last):
  ...
SyntaxError: unexpected EOF while parsing

>>> 1/0
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero


Consequently, doctest only recognises failures by their output,
regardless of whether that output is a value or an exception.


> Even if the command is correct:
>
> .. doctest:: example_2
>
> >>> print "hello, world"
>
> this text will fail because doctest expects an output. I want to be able
> to test that the syntax is correct, the command can be run, and ignore
> whatever the output is.

The only wild-card output that doctest recognises is the ellipsis, and like
all wild-cards, it can match too much if you aren't careful. If the ellipsis
is not matching exactly what you want, add some scaffolding to ensure a
match:

>>> import random
>>> x = random.uniform(0,100)
>>> print "x =", x # doctest:+ELLIPSIS
x = ...


will work. But a better solution, I think, would be to pick a
deterministic result:

>>> import random
>>> random.seed(100)
>>> random.uniform(0, 100)
14.566925510413032


Alas, that only works reliably if you stick to a single Python version.
Although the results of calling random.random are guaranteed to be stable
across versions, random functions built on top of it, such as uniform,
are not, and may need to be protected with a version check.
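There is one more escape hatch, though it leans on an internal: doctest's module-level ELLIPSIS_MARKER constant can be rebound, so the wildcard no longer collides with the continuation prompt. A sketch in Python 3 syntax; the constant is used by doctest's own test suite but is not formally documented, so treat it as fragile:

```python
import doctest

# ELLIPSIS_MARKER is a module-level constant in doctest; rebinding it
# means the wildcard can no longer be mistaken for the "..." prompt.
doctest.ELLIPSIS_MARKER = "-ignore-"

sample = '''
>>> import random
>>> x = random.uniform(0, 100)
>>> print(x)  # doctest: +ELLIPSIS
-ignore-
'''

parser = doctest.DocTestParser()
runner = doctest.DocTestRunner()
results = runner.run(parser.get_doctest(sample, {}, "sample", None, 0))
print(results.failed)
```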



--
Steven

Luca Cerone

Sep 22, 2013, 12:09:22 AM
> but if you're using this for a tutorial, you risk creating a breed of
> novice programmers who believe their first priority is to stop the
> program crashing. Smoke testing is a tool that should be used by the
> expert, NOT a sole check given to a novice.

Hi Chris,
actually my priority is to check that the code is correct. I changed the syntax
during development, and I want to be sure that my tutorial is up to date.

The user will only see the examples that run after being tested with doctest.
They won't know that I used doctests for the documentation.

How can I do what you call smoke tests in my Sphinx documentation?

Luca Cerone

Sep 22, 2013, 12:15:48 AM
> That is not how doctest works. That test fails because its output is:

OK... is there a tool with which I can test whether my code runs, regardless of the output?

> The only wild-card output that doctest recognises is ellipsis, and like
> all wild-cards, can match too much if you aren't careful.

actually I want to match the whole output, and you can't, because the ellipsis is the same as the line continuation...
> will work. But a better solution, I think, would be to pick a
> deterministic result:

I think you are sticking too closely to the examples I posted, where I used functions that are part of Python so that everybody could run the code and reproduce the issues.

I don't use random numbers, so I can't apply what you said.
Really, I am looking for a way to test the code while ignoring the output.

I don't know if it is usually a bad choice, but in my case it is what I want/need.

Thanks for the help,
Luca

Chris Angelico

Sep 22, 2013, 12:25:36 AM
to pytho...@python.org
I don't know Sphinx, so I can't help there. But maybe you should just
have a pile of .py files, and you import each one and see if you get
an exception?
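That idea might look roughly like this (smoke_test is a hypothetical helper; runpy runs each script as if from the command line, and only crashes are recorded, never output):

```python
import runpy
import tempfile
import traceback
from pathlib import Path

def smoke_test(directory):
    """Run every .py file in `directory`; record crashes, ignore all output."""
    failures = []
    for path in sorted(Path(directory).glob("*.py")):
        try:
            runpy.run_path(str(path))  # executes the file as a script
        except Exception:
            failures.append((path.name, traceback.format_exc()))
    return failures

# quick self-demo with one good and one broken script
with tempfile.TemporaryDirectory() as d:
    Path(d, "ok.py").write_text("x = 1 + 1\n")
    Path(d, "bad.py").write_text("raise ValueError('boom')\n")
    broken = smoke_test(d)

print([name for name, _ in broken])
```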

ChrisA

Steven D'Aprano

Sep 22, 2013, 8:12:51 AM
On Sat, 21 Sep 2013 21:15:48 -0700, Luca Cerone wrote:

> I am looking for a way to test the code while ignoring the output.

This makes no sense. If you ignore the output, the code could do ANYTHING
and the test would still pass. Raise an exception? Pass. SyntaxError?
Pass. Print "99 bottles of beer"? Pass.

I have sometimes written unit tests that just check whether a function
actually is callable:

ignore = function(a, b, c)

but I've come to the conclusion that is just a waste of time, since there
are dozens of other tests that will fail if function isn't callable. But
if you insist, you could always use that technique in your doctests:

>>> ignore = function(a, b, c)

If the function call raises, your doctest will fail, but if it returns
something, anything, it will pass.
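For instance (a self-contained sketch using stdlib calls in place of the hypothetical function(a, b, c); Python 3 syntax):

```python
import doctest

# Binding the result to a throwaway name means there is no output to
# compare; the examples pass as long as the calls don't raise.
sample = '''
>>> ignore = divmod(7, 3)
>>> ignore = len("hello")
'''

parser = doctest.DocTestParser()
runner = doctest.DocTestRunner()
results = runner.run(parser.get_doctest(sample, {}, "sample", None, 0))
print(results.failed)
```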


--
Steven

Ned Batchelder

Sep 22, 2013, 9:39:07 AM
to Luca Cerone, pytho...@python.org
On 9/22/13 12:09 AM, Luca Cerone wrote:
> Hi Chris,
> actually my priority is to check that the code is correct. I changed the syntax
> during the development, and I want to be sure that my tutorial is up to date.
>

If you do manage to ignore the output, how will you know that the syntax
is correct? The output for an incorrect syntax line will be an
exception, which you'll ignore. Maybe I don't know enough about the
details of doctest. It's always seemed incredibly limited to me.
Essentially, it's as if you used unittest but the only assertion you're
allowed to make is self.assertEqual(str(X), "....")

--Ned.

Luca Cerone

Sep 22, 2013, 10:24:49 AM
> This makes no sense. If you ignore the output, the code could do ANYTHING
> and the test would still pass. Raise an exception? Pass. SyntaxError?
> Pass. Print "99 bottles of beer"? Pass.

If you try the commands, you can see that the tests do fail.
For example:

.. doctest::

   >>> raise Exception("test")

will fail with this message:

File "utils.rst", line 5, in default
Failed example:
    raise Exception("test")
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/doctest.py", line 1289, in __run
        compileflags, 1) in test.globs
      File "<doctest default[0]>", line 1, in <module>
        raise Exception("test")
    Exception: test

So to me this seems OK; "print" examples will fail as well...

> I have sometimes written unit tests that just check whether a function
> actually is callable:
>
> ignore = function(a, b, c)
>
> but I've come to the conclusion that is just a waste of time, since there
> are dozens of other tests that will fail if function isn't callable. But
> if you insist, you could always use that technique in your doctests:
>
> >>> ignore = function(a, b, c)
>
> If the function call raises, your doctest will fail, but if it returns
> something, anything, it will pass.

I understand your point, but I am not writing unit tests to check the correctness of the code here. I am only writing a tutorial and assuming that
the code is correct. What I have to be sure of is that the code in the tutorial
can be executed correctly; some commands print verbose output, which can change.

Writing >>> ignore = function(a,b,c) is not enough: the function still prints messages on screen, and this causes the test to fail...
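One escape hatch, if the examples can be edited: redirect stdout around just the chatty call, so nothing is left for doctest to compare. A sketch; chatty_add is a made-up stand-in for such a function, and contextlib.redirect_stdout needs Python 3.4+, so it wouldn't help on a 2.7 setup:

```python
import io
from contextlib import redirect_stdout  # Python 3.4+

def chatty_add(a, b):
    # hypothetical stand-in for a function that prints progress messages
    print("working on it...")
    return a + b

buf = io.StringIO()
with redirect_stdout(buf):  # the chatty output lands in buf, not stdout
    result = chatty_add(2, 3)

print(result)
```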

Luca Cerone

Sep 22, 2013, 10:26:41 AM
On Sunday, 22 September 2013 14:39:07 UTC+1, Ned Batchelder wrote:
> On 9/22/13 12:09 AM, Luca Cerone wrote:
> > Hi Chris,
> > actually my priority is to check that the code is correct. I changed the syntax
> > during the development, and I want to be sure that my tutorial is up to date.
>
> If you do manage to ignore the output, how will you know that the syntax
> is correct? The output for an incorrect syntax line will be an
> exception, which you'll ignore.

If the function raises an exception, the test fails regardless of the output.

> Maybe I don't know enough about the
> details of doctest. It's always seemed incredibly limited to me.

I agree that it has some limitations.

> Essentially, it's as if you used unittest but the only assertion you're
> allowed to make is self.assertEqual(str(X), "....")

I don't know unittest; is it possible to use it within Sphinx?

> --Ned.

Steven D'Aprano

Sep 22, 2013, 8:36:38 PM
More or less :-)

Doc tests really are documentation first and tests second. That's its
strength. If you want unit tests, you know where to find them :-)


--
Steven

Neil Cerutti

Sep 23, 2013, 9:42:13 AM
On 2013-09-22, Luca Cerone <luca....@gmail.com> wrote:
> I understand your point, but now I am not writing unit tests to
> check the correctness of the code. I am only writing a tutorial
> and assuming that the code is correct. What I have to be sure
> is that the code in the tutorial can be executed correctly, and
> some commands print verbose output which can change.
>
> It is not enough to write >>> ignore = function(a,b,c) won't
> work because the function still prints messages on screen and
> this causes the failure of the test...

It won't be very good documentation any more, but nothing stops you
from examining the result in the next doctest and making yourself
happy about it.

>>> x = input("indeterminate:")
>>> result = "'{}'".format(x)
>>> result.startswith("'") and result.endswith("'")
True

--
Neil Cerutti

Luca Cerone

Sep 23, 2013, 10:45:43 AM
> It won't be very good documentation any more, but nothing stops you
> from examining the result in the next doctest and making yourself
> happy about it.
>
> >>> x = input("indeterminate:")
> >>> result = "'{}'".format(x)
> >>> result.startswith("'") and result.endswith("'")
> True

Hi Neil, thanks for the hint, but this won't work.

The problem is that the function displays some output informing you of which steps are being performed (some of it is displayed by a third-party function that I don't control).

This output "interferes" with the output that should be checked by doctest.

For example, you can check that the following doctest would fail:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> myfun(10)
   10

When you run `make doctest`, the test fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    myfun(10)
Expected:
    10
Got:
    random output
    10

In this case, imagine that "random output" is really random, so I cannot easily filter it except by ignoring several lines. This would be quite easy if the ellipsis and the line continuation didn't share the same sequence of characters, but unfortunately that is not the case.

The method you proposed is still not applicable, because I have no way to use startswith() and endswith()...

The following code could do what I want if I could ignore the output...

>>> def myfun(x):
...     print "random output"
...     return x
>>> result = myfun(10) # should ignore the output here!
>>> print result
10

fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    result = myfun(10)
Expected nothing
Got:
    random output
(line 11 contains: >>> result = myfun(10))

A SKIP directive is not feasible either:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> result = myfun(10) # doctest: +SKIP
   >>> result
   10

fails with this error message:

File "tutorial.rst", line 12, in example_fake
Failed example:
    result
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/doctest.py", line 1289, in __run
        compileflags, 1) in test.globs
      File "<doctest example_fake[2]>", line 1, in <module>
        result
    NameError: name 'result' is not defined

As you can see, it is not that I want something too weird; it is just that sometimes you can't control what a function displays, and ignoring the output is a reasonable way to implement a doctest.

Hope these examples helped you understand my problem better.

Thanks all of you guys for the hints, suggestions and best practices :)

Luca Cerone

unread,
Sep 23, 2013, 10:51:30 AM9/23/13
to
I don't know why, but it seems that Google Groups stripped the indentation from the code. I just wanted to assure you that in the examples I ran,
the definition of myfun contained correctly indented code!

Skip Montanaro

Sep 23, 2013, 11:14:48 AM
to Luca Cerone, Python
> I don't know why but it seems that google groups stripped the indentation from the code.

Because it's Google Groups. :-)

800-pound gorillas tend to do pretty much whatever they want.

Skip

Neil Cerutti

Sep 23, 2013, 12:44:22 PM
Perhaps try the "advanced API" and define your own OutputChecker
to add the feature that you need.

Figuring out how best to invoke doctest with your modified
OutputChecker will take some digging in the source, probably
starting from doctest.testmod. I don't see an example in the docs.
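A rough sketch of that idea, using the advanced API to register a hypothetical IGNORE_OUTPUT flag (untested against Sphinx itself, which would still need to be pointed at the custom checker):

```python
import doctest

# Register a new option flag; IGNORE_OUTPUT is a made-up name, not part
# of stdlib doctest or sphinx.ext.doctest.
IGNORE_OUTPUT = doctest.register_optionflag("IGNORE_OUTPUT")

class IgnoreOutputChecker(doctest.OutputChecker):
    def check_output(self, want, got, optionflags):
        if optionflags & IGNORE_OUTPUT:
            return True  # the example ran; accept whatever it printed
        return doctest.OutputChecker.check_output(self, want, got, optionflags)

# Wire it up: exceptions still fail the test, but ordinary output is
# ignored for flagged examples.
parser = doctest.DocTestParser()
runner = doctest.DocTestRunner(checker=IgnoreOutputChecker())
sample = '''
>>> print("some unpredictable output")  # doctest: +IGNORE_OUTPUT
'''
dt = parser.get_doctest(sample, {}, "sample", None, 0)
results = runner.run(dt)
print(results.failed)
```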

> Hope these examples helped to understand better what my problem
> is.

Yes, I think it's well-defined now.

--
Neil Cerutti

Neil Cerutti

Sep 23, 2013, 12:53:53 PM
On 2013-09-23, Neil Cerutti <ne...@norwich.edu> wrote:
> Perhaps try the "advanced API" and define your own
> OutputChecker to add the feature that you need.
>
> Figuring out how to best invoke doctest with your modified
> OutputChecker will take some digging in the source, probably
> looking at doctest.testmod. I don't see an example in the docs.

The docstring for doctest.DocTestRunner contains the example code
I was looking for.

--
Neil Cerutti

Luca Cerone

Sep 23, 2013, 2:54:10 PM
> The docstring for doctest.DocTestRunner contains the example code
> I was looking for.

Thanks, I will give it a try!
