New issue report by ondrej.certik:
And enable them with some option.
Issue attributes:
Status: Accepted
Owner: ----
Labels: Type-Defect Priority-Medium
Comment #1 by Vinzent.Steinberg:
Don't run xfailed tests by default? -1
Comment #2 by lance.c.larsen:
I suggested this idea (and Ondrej was kind enough to add the suggestion here) because I am a new developer to sympy and the xfail behaviour seemed confusing. I made some changes to sympy and ran the test cases. Several seemed to fail (xfail), so I assumed that my changes caused the problem. I ran the test cases against a fresh sympy 0.6.2 installation, still had xfailed cases, and assumed my python installation had problems. I reinstalled python and still had failures. After debugging enough test cases, finding comments indicating that a failure was expected, and noticing that each of these functions was wrapped by @XFAIL, I realized that some failures were left in and just earmarked as OK for the time being. Ondrej pointed out that there is a page explaining xfail, but when I started to run the test cases, I didn't know to look for such a page. I saw several 'f's when I ran py.test, which seemed to obviously indicate failures, and a note that there were 31 xfails. To me that said things weren't working correctly.

As a new developer it would be useful not to see the xfail cases, since they are "under review" in a sense and do not indicate failures caused by changes that you made. An experienced developer would tend to know how to turn the XFAIL cases on (if they are interested in them). I think it would benefit anyone new to the project to filter out xfails when running the test cases. Alternatively, you could label them in a way that makes it clear they are failures that have been reviewed and accepted for the time being, but I think filtering them out is cleaner and less likely to cause confusion. I think most developers only care whether the changes they made caused something to fail that was working before, so xfails are irrelevant to them.
Comment #3 by Vinzent.Steinberg:
Well, I understand that this is confusing for new developers. "xfail" is cryptic; maybe it should be more verbose ("tests expected to fail"). Same for "xpass".

I think xfailed tests that pass should always be shown - it's important to see if a change caused them to pass. This means that xfailed tests should always be run. I think it's OK to filter them out of the output, but they should still be run.
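(The proposal here - always run xfailed tests, hide expected failures from the default output, but always report unexpected passes - could be sketched roughly like this. All names, including the `show_xfail` option and the `xfail` attribute, are hypothetical, not sympy's actual runner API.)

```python
# Hypothetical sketch of the proposed reporting policy: every test runs,
# but expected failures are filtered from the report unless show_xfail
# is set, while unexpected passes (xpass) are always reported.

def run_tests(tests, show_xfail=False):
    report = []
    for func in tests:
        expected_fail = getattr(func, "xfail", False)  # made-up marker
        try:
            func()
            passed = True
        except Exception:
            passed = False
        if expected_fail and not passed:
            if show_xfail:  # expected failure: hidden by default
                report.append((func.__name__, "xfail"))
        elif expected_fail and passed:
            # A change made a known-broken test pass: always show it.
            report.append((func.__name__, "xpass"))
        else:
            report.append((func.__name__, "ok" if passed else "FAIL"))
    return report
```

For example, with one ordinary passing test and one test marked `xfail` that still fails, the default report would contain only the ordinary test; passing `show_xfail=True` would add the xfail entry.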
Comment #4 by ondrej.certik:
Yes, that's ok with me, let's implement it.