Bob H and others: Discuss: retire running unit tests externally?


Edward K. Ream

Dec 22, 2019, 6:26:34 AM
to leo-editor
This is issue #1463.

I never run unit tests externally.  However, they might be useful when run in the bridge.

What say you?

Edward

vitalije

Dec 22, 2019, 8:49:06 PM
to leo-editor
I often run tests externally. Most of the time they are not really testing anything; they are just a way for me to run Python scripts externally that have access to a lot of Leo's machinery without any imports, and some access to structured data. The same script would be much larger if used outside of Leo, and it would also have to be written in some file.

It is also a must if one wishes to use leoBridge inside the test script.

Vitalije

Matt Wilkie

Dec 22, 2019, 11:03:44 PM
to leo-editor
Travis or any other continuous-integration tests need to be run externally, don't they? Or does "run externally" mean something different from `python run_travis_unit_tests.py`?

-matt

Edward K. Ream

Dec 23, 2019, 10:55:08 AM
to leo-editor
On Sun, Dec 22, 2019 at 8:49 PM vitalije <vita...@gmail.com> wrote:
I often run tests externally. Most of the time they are not really testing anything; they are just a way for me to run Python scripts externally that have access to a lot of Leo's machinery without any imports, and some access to structured data. The same script would be much larger if used outside of Leo, and it would also have to be written in some file.

It is also a must if one wishes to use leoBridge inside the test script.

Thanks for these comments.  I've closed the issue and marked it "Won'tDo".

Edward

Edward K. Ream

Dec 24, 2019, 8:10:20 AM
to leo-editor
On Sun, Dec 22, 2019 at 11:03 PM Matt Wilkie <map...@gmail.com> wrote:
Travis or any other continuous-integration tests need to be run externally, don't they? Or does "run externally" mean something different from `python run_travis_unit_tests.py`?

Not sure, but this issue is moot.  These commands will not be retired.

Edward

SegundoBob

Dec 26, 2019, 6:04:17 PM
to leo-editor
Edward,

I have never benefited from leo-editor/leo/test/unitTest.leo.  At your request I have run the unit tests one or two times and reported the results, but I have never figured them out enough to benefit from them.  In my opinion, it is best to use unit tests that impose as few levels of cleverness as possible between the tested program and the test.  The unit test directives in Leo-Editor are an unnecessary level of cleverness.  Hence, I don't use them.

Consequently, I have little to say about leo-editor/leo/test/unitTest.leo and probably nothing to say of any value.

But I will note that issue #1350 caused you to write a script to run the unit tests using LeoBridge, and this script still works.  I had to modify your script a little to make it work in my installation.  I ran it on my then-current version.  Then I ran it on the latest Leo-Editor version.

Run using 2019-10-06 Version of Leo-Editor
Ran 925 tests in 21.108s

FAILED (failures=1, skipped=50)

Run using 2019-12-24 Version of Leo-Editor
Ran 938 tests in 25.360s

FAILED (errors=1, skipped=53)


Modified Script
#!/usr/bin/python
# -*- encoding: utf-8 -*-

# 2019-09-27 Fri Edward K. Ream Author

# Begin Not Needed and not right for my installation. SegundoBob
# Add leo-editor to sys.path.
##import sys
##leo_path = r'C:\leo.repo\leo-editor'
##if leo_path not in sys.path:
##    sys.path.append(leo_path)
# End Not Needed and not right for my installation. SegundoBob

# Start the bridge.
import leo.core.leoBridge as leoBridge
path = r'/pri/git/leo-editor/leo/test/unitTest.leo'
controller = leoBridge.controller(
    gui='nullGui',
    loadPlugins=False,  # True: attempt to load plugins.
    readSettings=False, # True: read standard settings files.
    silent=False,       # True: don't print signon messages.
    verbose=False,      # True: print informational messages.
    useCaches=False,    # True: read and write caches.
)
# Run the unit tests.
g = controller.globals()
g.app.failFast = False
c = controller.openLeoFile(path)
c.k.simulateCommand('run-all-unit-tests-locally')


Run using 2019-10-06 Version of Leo-Editor
Test System
Leo Log Window
Leo 6.1-devel, devel branch, build da55f3fbad
2019-10-06 16:53:59 -0500
Python 3.6.9, PyQt version 5.9.5
linux


Run
2019-12-26 14:09:37 /pri/git/leo_bug_demos/unit_tests
$ ./*.py
setting leoID from os.getenv('USER'): 'bob'
found  1 doctests for leo.core.leoTest
found  2 doctests for leo.core.leoGlobals

g.app.old_gui_name: 'nullGui', g.app.gui.guiName(): nullGui

......
--global-docks: False

import-jupyter-notebook requires nbformat package
.ss................s........................
End of leoAtFile tests
...s..s.ssss.ss.....................................
End of leoColor tests
...............s......sss..
End of leoCommands tests
.........s..s.
End of leoConfig tests
....ss........................................................................s.........ssssssssss........................................................................s......s.....s.....sss..sss..
End of typing tests
.
End of leoEditCommands tests.
........
End of leoFileCommands tests.
.s.....s.s.....
End of leoFind tests.
.....s.
End of leoFrame tests.
...................................................................
End of leoGlobals tests.
.import-jupyter-notebook requires nbformat package
...............s..................................................................................................................................
End of leoImport tests.
.......
End of leoKeys tests.
.......................................
End of leoNodes tests.
....................ss..s..ss...Unexpected docutils exception
Traceback (most recent call last):

  File "/pri/git/leo-editor/leo/core/leoRst.py", line 1630, in writeToDocutils
    settings_overrides=overrides)

  File "/usr/lib/python3/dist-packages/docutils/core.py", line 416, in publish_string
    enable_exit_status=enable_exit_status)

  File "/usr/lib/python3/dist-packages/docutils/core.py", line 664, in publish_programmatically
    output = pub.publish(enable_exit_status=enable_exit_status)

  File "/usr/lib/python3/dist-packages/docutils/core.py", line 219, in publish
    output = self.writer.write(self.document, self.destination)

  File "/usr/lib/python3/dist-packages/docutils/writers/__init__.py", line 80, in write
    self.translate()

  File "/pri/git/leo-editor/leo/plugins/leo_pdf.py", line 626, in translate
    self.output = self.createPDF_usingPlatypus(story)

  File "/pri/git/leo-editor/leo/plugins/leo_pdf.py", line 598, in createPDF_usingPlatypus
    doc.build(story)

  File "/usr/lib/python3/dist-packages/reportlab/platypus/doctemplate.py", line 1213, in build
    BaseDocTemplate.build(self,flowables, canvasmaker=canvasmaker)

  File "/usr/lib/python3/dist-packages/reportlab/platypus/doctemplate.py", line 995, in build
    self._endBuild()

  File "/usr/lib/python3/dist-packages/reportlab/platypus/doctemplate.py", line 930, in _endBuild
    if getattr(self,'_doSave',1): self.canv.save()

  File "/usr/lib/python3/dist-packages/reportlab/pdfgen/canvas.py", line 1237, in save
    self._doc.SaveToFile(self._filename, self)

  File "/usr/lib/python3/dist-packages/reportlab/pdfbase/pdfdoc.py", line 224, in SaveToFile
    f.write(data)

TypeError: string argument expected, got 'bytes'

F.....................................................................
End of leoUndo tests.
..........................................................................................................................................................................
End of plugins unit tests
............s.....
all unit tests done
..
======================================================================
FAIL: runTest (leo.core.leoTest.GeneralTestCase)
@test c.rstCommands.writeToDocutils: pdf

----------------------------------------------------------------------
Traceback (most recent call last):
  File "/pri/git/leo-editor/leo/core/leoTest.py", line 193, in runTest
    exec(compile(script, scriptFile, 'exec'), d)
  File "/home/bob/.leo/scriptFile.py", line 23, in <module>
    assert result,result
AssertionError: None

----------------------------------------------------------------------
Ran 925 tests in 21.108s

FAILED (failures=1, skipped=50)
2019-12-26 14:11:42 /pri/git/leo_bug_demos/unit_tests

Run using 2019-12-24 Version of Leo-Editor
Test System
Leo 6.2-b1-devel, devel branch, build d7fb550b80
2019-12-24 13:56:02 -0500
Python 3.6.9, PyQt version 5.9.5
linux

Run
2019-12-26 14:23:37 /pri/git/leo_bug_demos/unit_tests
$ ./unit_tests_bridge.py
setting leoID from os.getenv('USER'): 'bob'
found  1 doctests for leo.core.leoTest
found  2 doctests for leo.core.leoGlobals

g.app.old_gui_name: 'nullGui', g.app.gui.guiName(): nullGui

......
--global-docks: False

import-jupyter-notebook requires nbformat package
.ss.............
Running *all* unit tests...

.....sss........................
End of leoAtFile tests
....s..s....ssss.ss.....................................
End of leoColor tests
...............ss.....sss..
End of leoCommands tests
.........s..s.
End of leoConfig tests
....ss........................................................................s.........ssssssssss........................................................................s......s.....s.....sss..sss..
End of typing tests
.
End of leoEditCommands tests.
........
End of leoFileCommands tests.
.s.....s.s.....
End of leoFind tests.
.....s.
End of leoFrame tests.
.................................................................
End of leoGlobals tests.
.import-jupyter-notebook requires nbformat package
...............s..................................................................................................................................
End of leoImport tests.
.......
End of leoKeys tests.
.......................................
End of leoNodes tests.
....................ss..s..ss...E.....................................................................
End of leoUndo tests.
.........................................................................................................................................................................
End of plugins unit tests
............s.............
all unit tests done
..
======================================================================
ERROR: runTest (leo.core.leoTest.GeneralTestCase)
@test c.rstCommands.writeToDocutils: pdf

----------------------------------------------------------------------
Traceback (most recent call last):
  File "/pri/git/leo-editor/leo/core/leoTest.py", line 198, in runTest
    exec(compile(script, scriptFile, 'exec'), d)
  File "/home/bob/.leo/scriptFile.py", line 16, in <module>
    module = g.importFromPath(
AttributeError: module 'leo.core.leoGlobals' has no attribute 'importFromPath'

----------------------------------------------------------------------
Ran 938 tests in 25.360s

FAILED (errors=1, skipped=53)
2019-12-26 14:24:19 /pri/git/leo_bug_demos/unit_tests
$

Respectfully,
SegundoBob

Edward K. Ream

Dec 27, 2019, 4:28:19 AM
to leo-editor
On Thu, Dec 26, 2019 at 6:04 PM SegundoBob <segun...@gmail.com> wrote:


> I have never benefited from leo-editor/leo/test/unitTest.leo... Hence, I don't use them.

Good to know.

Thanks for your detailed report.  As noted elsewhere, this entire question is moot.  The ability to run unit tests externally will remain.

> In my opinion, it is best to use unit tests that impose as few levels of cleverness as possible between the tested program and the test.  The unit test directives in Leo-Editor are an unnecessary level of cleverness. 

I have several responses:

1. Leo's own unit tests might be called a special case.  Many unit tests benefit from being able to store outline data in children of @test nodes.

2. I am about to convert the unit tests for leoAst.py into "traditional" unit tests within leoAst.py.  This will allow people to use leoAst.py separately from Leo.  It will be interesting to compare the two approaches.

3. It's been a long time since the behind-the-scenes machinery involved in running @test nodes has caused any problems for me. The convenience of avoiding setUp(), tearDown() and all the rest is considerable. Binding "self" to the behind-the-scenes instance of unittest.TestCase allows full flexibility when needed.
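For comparison, a "traditional" unit test of the kind mentioned in point 2 uses only the standard unittest machinery, with no @test nodes. The following is a minimal generic sketch, not taken from leoAst.py; the tokenize round-trip check is an illustrative stand-in:

```python
# Sketch of a "traditional" unittest-style test, runnable outside Leo
# (e.g. via `python -m unittest`). Generic example, not leoAst.py's tests.
import io
import tokenize
import unittest

class TestRoundTrip(unittest.TestCase):

    def setUp(self):
        # Explicit fixture setup replaces the data an @test node
        # would otherwise hold in its child nodes.
        self.source = "x = 1\n"

    def test_tokens_round_trip(self):
        # Tokenizing and then untokenizing should reproduce the source.
        tokens = list(tokenize.generate_tokens(
            io.StringIO(self.source).readline))
        self.assertEqual(tokenize.untokenize(tokens), self.source)
```

The trade-off is visible even in this small example: the fixture lives in setUp() rather than in an outline, but the test file is fully self-contained and runs anywhere Python does.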

Edward