python -m unittest test_module1 test_module2
python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method
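For reference, a minimal module that the invocations above would run might look like this (the module, class, and method names are illustrative, matching the placeholder names in the commands):

```python
import unittest

class TestClass(unittest.TestCase):
    """A placeholder suite; run with: python -m unittest test_module.TestClass"""

    def test_method(self):
        # Replace with a real assertion about Leo's behavior.
        self.assertEqual('head'.upper(), 'HEAD')
```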
# Run from Leo's script environment, where g is predefined:
import os
g.cls()
leo_editor_dir = g.os_path_finalize_join(g.app.loadDir, '..', '..')
os.chdir(leo_editor_dir)
commands = r'python -m unittest leo.core.leoAst.TestTOG'
g.execute_shell_commands(commands, trace=False)
For a long time I've been feeling that Leo unit tests don't prove anything. They usually don't exercise real Leo code at all, or if they do, they exercise only a small portion of it. So the fact that unit tests are passing doesn't mean Leo would work properly for real users.
It might take a huge effort to fully eliminate all `if g.unitTesting` conditionals from Leo core, but it might be worth doing.
git origin add btheado https://github.com/btheado/leo-editor.git
git checkout btheado pytest-experiment
pip install pytest
pytest leo/test/pytest
pip install pytest-cov
pytest --cov-report html --cov-report term-missing --cov=leo.core.leoNodes leo/test/pytest/leoNodes_test.py
firefox htmlcov/leo_core_leoNodes_py.html  # Or whatever your web browser is
pytest leo/test/pytest --runxfail
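For context, `--runxfail` tells pytest to run tests marked `xfail` as ordinary tests instead of reporting them as expected failures. A minimal sketch of such a marker (the test name and reason string are illustrative):

```python
import pytest

# Without --runxfail this reports as XFAIL; with --runxfail the failure
# is reported like any other test failure.
@pytest.mark.xfail(reason="known gap, not yet fixed")
def test_known_gap():
    assert 2 + 2 == 5
```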
I've been experimenting lately with writing pytest tests for Leo. I just published my work at https://github.com/btheado/leo-editor/tree/pytest-experiment.
You should be able to try it out with these commands (untested):

git origin add btheado https://github.com/btheado/leo-editor.git
git checkout btheado pytest-experiment
pip install pytest
pytest leo/test/pytest
On Saturday, December 28, 2019 at 10:44:39 AM UTC-5, btheado wrote:

> I've been experimenting lately with writing pytest tests for Leo. I just published my work at https://github.com/btheado/leo-editor/tree/pytest-experiment.
> You should be able to try it out with these commands (untested):
> git origin add btheado https://github.com/btheado/leo-editor.git
> git checkout btheado pytest-experiment
> pip install pytest
> pytest leo/test/pytest

The first line didn't work. I did `git clone https://github.com/btheado/leo-editor.git brian`, but I don't see the pytest folder in the leo/test folder, which is strange.
With my present setup, `pytest leo\core\leoAst.py` executes all the present unit tests in leoAst.py:

leo\core\leoAst.py .......................................................................
========================== 71 passed in 5.42 seconds ==========================

So I don't understand what difference pytest makes. What am I missing?
The "pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries."

pip install pytest-cov
pytest --cov-report html --cov-report term-missing --cov=leo.core.leoAst leo/core/leoAst.py
firefox htmlcov/leo_core_leoAst_py.html
Btw, giving a directory does not work (pytest 3.6.7). For example, `pytest leo\core` finds no tests.

By default, pytest only collects tests from `test_*.py` or `*_test.py` files.

> You might need to introduce failed tests in order to experience the better assert failure reporting?

Leo's existing unit tests use asserts. It's no big deal.
I was looking at the tests in leoAst.py in the fstringify branch and I don't find any asserts in the tests themselves.
On Sun, Dec 29, 2019 at 5:29 PM Brian Theado <brian....@gmail.com> wrote:
> You might also find the code coverage report useful:

Yes, that's interesting. The TOG classes remember which visitors have been visited, so that's probably enough.
pip install pytest-cov
pytest --cov-report html --cov-report term-missing --cov=leo.core.leoAst leo/core/leoAst.py
firefox htmlcov/leo_core_leoAst_py.html  # Or whatever your web browser is
sync_newline
do_AsyncFunctionDef
do_Interactive
do_Expression
do_Constant
do_ExtSlice
do_Set
do_SetComp
On Sunday, December 29, 2019 at 7:19:03 PM UTC-5, btheado wrote:

> I was looking at the tests in leoAst.py in the fstringify branch and I don't find any asserts in the tests themselves.

The tests in the TestTOG class are actually extremely strong. I have just added the following to the docstring for the TestTOG class:

    These tests call BaseTest.make_data, which creates the two-way links between tokens and the parse tree.

    The asserts in tog.sync_tokens suffice to create strong unit tests.
def test_attribute(self):
    contents = r"""\
open(os.devnull, "w")
"""
    self.make_data(contents)
next(TokenOrderGenerator.fromString(r"""open(os.devnull, "w")""")) == ???
or
list(TokenOrderGenerator.fromString(r"""open(os.devnull, "w")""")) == [???, ???, ???]
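The `TokenOrderGenerator.fromString` calls and `???` placeholders above are hypothetical. For a concrete feel of the kind of token stream such a generator traverses, the stdlib tokenizer can show it for the same snippet:

```python
import io
import tokenize

# Tokenize the same snippet used in the example above.
source = 'open(os.devnull, "w")'
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
kinds = [tokenize.tok_name[t.type] for t in tokens]
print(kinds[:4])  # ['NAME', 'OP', 'NAME', 'OP'] for: open ( os .
```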
self.make_data(contents)
contents, tokens, tree = self.make_data(contents)
dump_tree(tree)
Tree...
parent lines node tokens
====== ===== ==== ======
6..7 0.Module: newline.33(6:0)
0.Module 1.Expr:
1.Expr 1 2.Str: s='ds 1' string.1("""ds 1""")
0.Module 1..2 3.ClassDef: newline.2(1:11) name.3(class) name.5(TestClass) op.6=:
3.ClassDef 4.Expr:
4.Expr 2..3 5.Str: s='ds 2' newline.7(2:17) string.9("""ds 2""")
3.ClassDef 3..4 6.FunctionDef: newline.10(3:15) name.12(def) name.14(long_name) op.23=:
6.FunctionDef 4 7.arguments: op.20==
7.arguments 4 8.arg: name.16(a)
7.arguments 4 9.arg: name.19(b)
7.arguments 4 10.Num: n=2 number.21(2)
6.FunctionDef 11.Expr:
11.Expr 4..5 12.Str: s='ds 3' newline.24(4:27) string.26("""ds 3""")
6.FunctionDef 13.Expr:
13.Expr 14.Call:
14.Call 5..6 15.Name: id='print' newline.27(5:19) name.29(print)
14.Call 6 16.Str: s='done' string.31('done')
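For comparison, the stdlib's `ast` module renders the same kind of parse tree, without the token links that `dump_tree` shows:

```python
import ast

# Parse the final statement from the dumped example above.
tree = ast.parse("print('done')")
call = tree.body[0].value  # the Expr node wraps a Call
print(ast.dump(call))      # shows the Call(func=Name(id='print'...)) structure
```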
On Wed, Jan 8, 2020 at 5:55 AM Brian Theado <brian....@gmail.com> wrote:

> I often read unit tests in source code projects in the hope of finding simple, concrete usage examples. These examples not only serve to test the code, but also serve as documentation on how to use it.
> But from that I don't see what the outputs are. It doesn't show me what a TOG object can do and what it is for.

For an overview, see the Theory of Operation in LeoDocs.leo.
> Is there some code you can add which makes it easy to see how things work?

Good question. The dozen or so tests in the TestFstringify class assert that a small example produces the desired result.
On Tue, Jan 7, 2020 at 1:24 PM Brian Theado <brian....@gmail.com> wrote:

> I still think full line-by-line coverage analysis will be very valuable.

I agree. I have just created #1474 for this. Many thanks for suggesting this. `pytest --cov` has quickly become one of my favorite tools, on a par with pyflakes and pylint.
pytest is easy to configure, and it's easy to suppress coverage testing within the code itself. leo-editor/.coveragerc now contains the default settings.
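A `.coveragerc` along these lines would do that; this is a sketch of coverage.py's `exclude_lines` mechanism (see leo-editor/.coveragerc for the actual settings, which may differ):

```ini
[report]
exclude_lines =
    pragma: no cover
    assert
    except
    raise
```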
On Sat, Jan 11, 2020 at 2:11 PM Brian Theado <brian....@gmail.com> wrote:

> I have doubts about the following entries you are suppressing: assert, except, raise.

Imo, they are fine. Asserts signal that something is seriously wrong with the tool, not the user input.
if e is not None: assert e.lower() == 'utf-8', repr(e)