ENB: Converting unit tests for @shadow


Edward K. Ream

Sep 8, 2021, 10:21:37 AM
to leo-editor
The ekr-unit-test branch contains the work for #1766: cover Leo with unit tests. Work is well along, as shown by PR #2159.

In this Engineering Notebook post, I'll say a few words about converting the unit tests for @shadow. These unit tests are arguably the most important because they test the @shadow update algorithm.

The unit tests are also (by far) the most difficult to translate into "flat" (traditional) unit tests. Indeed, the @shadow update algorithm uses an actual Leo outline as part of its data, so the new unit tests must do likewise.
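As a sketch of what a "flat" test looks like, here is a minimal unittest.TestCase that rebuilds its outline data in setUp. The Node class below is a hypothetical stand-in for Leo's real vnode/position API, not Leo's actual code:

```python
import unittest

class Node:
    """Hypothetical stand-in for a Leo outline node (not Leo's real API)."""
    def __init__(self, h, b=''):
        self.h, self.b, self.children = h, b, []

    def add_child(self, h, b=''):
        child = Node(h, b)
        self.children.append(child)
        return child

class TestShadowUpdate(unittest.TestCase):
    """A flat test recreates the outline data its old @test node carried."""
    def setUp(self):
        self.root = Node('root')
        self.root.add_child('old', 'line 1\nline 2\n')
        self.root.add_child('new', 'line 1\nline 2, changed\n')

    def test_outline_was_recreated(self):
        self.assertEqual([z.h for z in self.root.children], ['old', 'new'])
```

Run it with python -m unittest in the usual way; the point is only that the data lives inside the test itself instead of in unitTest.leo.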

The ConvertShadowTests.convert_node script is in convertCommands.py. convert_node is part of a larger script that converts @test nodes from unitTest.leo to traditional unit tests. Doing the conversion by hand is out of the question: I must be free to experiment with the contents of each unit test, and altering tests by hand would be tedious and highly error-prone.

convert_node creates nodes that pass unit tests. That's not bad, but I suspect that the tests aren't testing what I think they should. The only way to know for sure is to study what the existing unit tests do. I'll tweak convert_node as needed to ensure that the unit tests actually do what the old tests did!

Summary

The old unit tests use actual outline data. The new unit tests will recreate this data.

The old tests were functional tests, not coverage tests. For Leo's @shadow code, functional tests are much more important than coverage tests. However, coverage tests will act as an additional check on the unit tests.

For example, suppose I had naively assumed that all was well just because all the new tests passed. Coverage testing might reveal that all the new tests are actually the same test!
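That check can be approximated even without coverage.py. The sketch below uses sys.settrace to record which lines each test executes; if two supposedly different tests hit exactly the same lines, they may well be the same test:

```python
import sys

def lines_executed(fn, *args):
    """Record the line numbers executed while fn runs -- a tiny stand-in
    for the per-test data coverage.py collects."""
    hits = set()
    def tracer(frame, event, arg):
        if event == 'line':
            hits.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return hits

def update(old, new):
    # Toy update function with two branches.
    if old == new:
        return old
    return new

# Two tests that exercise different branches leave different footprints.
trace_same = lines_executed(update, 'a', 'a')
trace_diff = lines_executed(update, 'a', 'b')
```

If trace_same equaled trace_diff, the two tests would be interchangeable as far as coverage is concerned.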

Edward

tbp1...@gmail.com

Sep 8, 2021, 10:49:18 AM
to leo-editor
For converting outlines, maybe some fuzzing tests as well?  Most should fail...

Edward K. Ream

Sep 8, 2021, 12:15:03 PM
to leo-editor
On Wed, Sep 8, 2021 at 9:49 AM tbp1...@gmail.com <tbp1...@gmail.com> wrote:

For converting outlines, maybe some fuzzing tests as well?  Most should fail...

Hehe, I had to google "python fuzzy tests".  Sounds interesting, but I'll leave that for later :-)
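For the record, here is roughly what the simplest possible fuzz test might look like. Both the mutator and the toy parser below are illustrative only; a real effort would fuzz Leo's actual file-reading code, ideally with a proper tool such as atheris:

```python
import random

def mutate(text, rng, n=5):
    """Corrupt n random characters -- the crudest possible fuzzer."""
    chars = list(text)
    for _ in range(n):
        i = rng.randrange(len(chars))
        chars[i] = chr(rng.randrange(32, 127))  # random printable ASCII
    return ''.join(chars)

def parse_outline(text):
    """Toy outline parser: every line must be tabs followed by a headline."""
    nodes = []
    for line in text.splitlines():
        headline = line.lstrip('\t')
        if not headline or headline != headline.strip():
            raise ValueError(f'bad line: {line!r}')
        nodes.append((len(line) - len(headline), headline))
    return nodes

rng = random.Random(42)
sample = 'root\n\tchild one\n\tchild two\n'
rejected = 0
for _ in range(100):
    try:
        parse_outline(mutate(sample, rng))
    except ValueError:
        rejected += 1
# The property under test: every input is cleanly accepted or cleanly
# rejected -- the parser never crashes with an unexpected exception.
```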

Update re the new tests

It looks like the new unit tests are valid. I compared traces in the setup logic in two cases:

1. Running the old unit tests (in devel, from unitTest.leo)
2. Running the new unit tests (in ekr-unit-test, with the new test-shadow command).

The output looks identical. In particular, the sentinel lines are as expected. I must only ensure that the following two structures are identical:

1. The structure of the children of each @test node in unitTest.leo.
2. The structure of nodes created by each new unit test.

A test would likely fail if this were not true, but a by-hand check looks advisable.
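The by-hand check can itself be mechanized. The helper below reduces any node with h and children attributes (a stand-in for Leo's real positions) to a nested tuple, so two outlines can be compared with ==:

```python
from types import SimpleNamespace

def structure(node):
    """Reduce an outline to (headline, [child structures]) for comparison."""
    return (node.h, [structure(child) for child in node.children])

def N(h, *children):
    # Convenience constructor, for illustration only.
    return SimpleNamespace(h=h, children=list(children))

old_tree = N('@test shadow 1', N('before'), N('after'))
new_tree = N('@test shadow 1', N('before'), N('after'))
```

structure(old_tree) == structure(new_tree) holds exactly when the headline trees match node for node.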

Edward

tbp1...@gmail.com

Sep 8, 2021, 12:59:36 PM
to leo-editor
If you need to evaluate the structure, maybe an XSLT transform would be a good approach. Or, building on that, a Schematron test (https://schematron.com/document/2760.html?publicationid=). If you don't know about Schematron: it uses XPath expressions to locate parts of the document under test, then tests them for correctness with rules and assertions. Since Leo outlines are XML files, these may be good approaches. BTW, Schematron is an international standard, ISO/IEC 19757-3.
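Since .leo files are XML, even the standard library gets partway there. The fragment below only imitates the shape of a .leo file's vnodes section (simplified; the real format carries more attributes). ElementTree supports a small XPath subset; full XPath, XSLT, or Schematron would need lxml or a dedicated processor:

```python
import xml.etree.ElementTree as ET

# A fragment in the shape of a .leo file's outline section (simplified).
LEO_XML = """\
<leo_file>
  <vnodes>
    <v t="ekr.1"><vh>@test shadow 1</vh>
      <v t="ekr.2"><vh>before</vh></v>
      <v t="ekr.3"><vh>after</vh></v>
    </v>
  </vnodes>
</leo_file>"""

root = ET.fromstring(LEO_XML)
# ElementTree's limited XPath: every <vh> child of any <v>, in document order.
headlines = [vh.text for vh in root.findall('.//v/vh')]
```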

David Szent-Györgyi

Sep 10, 2021, 9:18:14 AM
to leo-editor
For data that is generated by hand or by plugins that are not centrally controlled, and for tests that are difficult to write, would the thoroughness of property-based testing be relevant? I wrote about that some time back. 
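For concreteness, here is the flavor of a property-based test, hand-rolled with random rather than a real framework such as Hypothesis. The sentinel format below is made up, a loose imitation of the sentinels @shadow writes:

```python
import random

def insert_sentinels(lines):
    """Toy private file: wrap the public lines in '#@' sentinel lines."""
    return ['#@+leo'] + list(lines) + ['#@-leo']

def strip_sentinels(lines):
    """Toy public file: everything that is not a sentinel."""
    return [z for z in lines if not z.startswith('#@')]

# Property: strip(insert(x)) == x, checked on many random inputs.
rng = random.Random(0)
for _ in range(200):
    lines = [f'line {rng.randrange(1000)}' for _ in range(rng.randrange(10))]
    assert strip_sentinels(insert_sentinels(lines)) == lines
```

A framework like Hypothesis would generate the inputs itself, probe edge cases deliberately, and shrink any failing input to a minimal example.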

Edward K. Ream

Sep 11, 2021, 4:19:20 PM
to leo-editor
On Friday, September 10, 2021 at 8:18:14 AM UTC-5 David Szent-Györgyi wrote:
For data that is generated by hand or by plugins that are not centrally controlled, and for tests that are difficult to write, would the thoroughness of property-based testing be relevant? I wrote about that some time back. 

The short answer is "I don't know" :-)

Edward

Edward K. Ream

Sep 11, 2021, 8:42:24 PM
to leo-editor
On Fri, Sep 10, 2021 at 8:18 AM David Szent-Györgyi <das...@gmail.com> wrote:

For data that is generated by hand or by plugins that are not centrally controlled, and for tests that are difficult to write, would the thoroughness of property-based testing be relevant? I wrote about that some time back. 

Let me give a more direct answer to your question.

The ekr-unit-test branch does not, by itself, make Leo more or less buggy. In the future, however, the new unit testing framework will make it easier to add new unit tests.

Imo, there is no urgent need for more tests anywhere. The tests for @shadow seem to cover all the important cases that arise in the update algorithm. It is, however, necessary to convince oneself that the new tests are equivalent to the old. This I have done to my own satisfaction. I could be wrong, but weak unit tests don't change the code being tested.

The new test framework does highlight the lack of any unit tests for Leo's gui code. I am not inclined to add such tests. Using Leo on a daily basis seems sufficient, but the ongoing Qt6 bugs are troubling.

Just a few minutes ago another long-standing bug appeared. See #2169. I'll fix it in devel, then merge devel into ekr-unit-test. I'm not sure a new unit test is warranted :-)

In short, the ekr-unit-test branch is a huge simplification of Leo's code, but I do not plan to do much more than fix bugs as they arise. But for the first time other devs (hint hint) may find Leo's unit testing framework pleasant to use. All contributions gratefully accepted!

Edward