A better test runner

Edward K. Ream

Nov 16, 2019, 9:30:35 AM
to leo-editor
The fstring branch now contains a more flexible test runner. Its signature is:

def test_token_traversers(contents, reports=None):

The postscript shows the entire code. Here is the test script that calls the test runner:

g.cls()
import imp
import leo.core.leoAst as leoAst
imp.reload(leoAst)
small = False
<< define contents >>
contents = contents.strip() + '\n'
# Define the ordered list of reports.
# 'coverage', 'tokens', 'contents', 'diff', 'results', 'lines', 'tree'
reports = ['contents', 'diff'] if small else ['diff']
# Run the tests.
leoAst.test_token_traversers(contents, reports + ['summary'])

Edward

P. S. Here is the test runner:

def test_token_traversers(contents, reports=None):
    """
    A testing framework for TokenOrderGenerator and related classes.

    The caller should call imp.reload if desired.

    Reports is a list of reports. A suggested order is shown below.
    """
    # pylint: disable=import-self
    import leo.core.leoAst as leoAst
    import leo.core.leoGlobals as g
    reports = [z.lower() for z in reports or []]
    assert isinstance(reports, list), repr(reports)
    # Start test.
    print('\nleoAst.py:test_token_traversers...\n')
    contents = contents.strip() + '\n'
    # Create tokens and tree.
    x = leoAst.TokenOrderInjector()
    tokens = x.make_tokens(contents)
    tree = leoAst.parse_ast(contents)
    # Catch exceptions so we can get data late.
    try:
        ok = True
        list(x.create_links(tokens, tree))
    except Exception:
        g.es_exception()
        ok = False
    # Print reports, in the order they appear in the reports list.
    # The following is a reasonable order.
    bad_reports = []
    while reports:
        report = reports.pop(0)
        if report == 'coverage':
            x.report_coverage(report_missing=False)
        elif report == 'tokens':
            print('\nTokens...\n')
            # pylint: disable=not-an-iterable
            for z in x.tokens:
                print(z.dump())
        elif report == 'contents':
            print('\nContents...\n')
            for i, z in enumerate(g.splitLines(contents)):
                print(f"{i+1:<3} ", z.rstrip())
        elif report == 'diff':
            print('\nDiff...\n')
            x.verify()
        elif report == 'results':
            print('\nResults...\n')
            results = ''.join([b for a, b in x.results])
            for i, z in enumerate(g.splitLines(results)):
                print(f"{i+1:<3} ", g.truncate(z.rstrip(), 60))
        elif report == 'lines':
            print('\nTOKEN lines...\n')
            for z in tokens:
                if z.line.strip():
                    print(z.line.rstrip())
                else:
                    print(repr(z.line))
        elif report == 'tree':
            print('\nPatched tree...\n')
            print(leoAst.AstDumper().brief_dump(tree))
        elif report == 'summary':
            if x.errors:
                print('\nErrors...\n')
                for z in x.errors:
                    print('  ' + z)
                print('')
            ok = ok and not x.errors
            print('')
            print('PASS' if ok else 'FAIL')
        else:
            bad_reports.append(report)
    if bad_reports:
        print(f"\nIgnoring unknown reports: {', '.join(bad_reports)}\n")

EKR

Edward K. Ream

Nov 18, 2019, 4:38:31 PM
to leo-editor
On Saturday, November 16, 2019 at 8:30:35 AM UTC-6, Edward K. Ream wrote:
The fstring branch now contains a more flexible test runner. Its signature is:

def test_token_traversers(contents, reports=None):

Some updates:

1. It's now called "test_runner" :-)

2. A driver script, in my personal .leo file, calls the test runner.  Here it is:

import imp
import leo.core.leoAst as leoAst
imp.reload(leoAst)

use_file = False
path = r'C:\leo.repo\leo-editor\leo\core\runLeo.py'

<< define contents >>
contents = contents.strip() + '\n'

reports = [
    # 'coverage',
    # 'fail-fast',
    # 'contents',
    # 'tokens',
    # 'results',
    'diff',
    'assign-links',
    # 'lines',
    'tokens',
    'results',
    'tree',
    'summary',
]
# Run the tests.
leoAst.test_runner(contents, reports)

The driver contains several useful improvements:

1. There is only one list of switches.

In practice, I am continually enabling or disabling switches, so having each switch on a single line is convenient. A single list eliminates confusion.

2. There are two new switches, "fail-fast" and "no-fail-fast".

The test runner sets or clears its fail_fast var when it sees these. The runner exits its main loop (and reports failure) when a test fails and fail_fast is true.

Multiple "fail-fast" and "no-fail-fast" switches could exist in the list of switches, though at present there is only one real test.
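The switch handling could be sketched like this. This is only an illustration of the idea, not the actual code in leoAst.py; run_one_test is a hypothetical stand-in for running a single constituent test:

```python
def run_reports(reports, run_one_test):
    """
    Process switches in order.
    'fail-fast' and 'no-fail-fast' toggle the fail_fast var;
    all other entries are treated as tests to run.
    """
    fail_fast, ok = False, True
    for report in reports:
        if report == 'fail-fast':
            fail_fast = True    # Later failures stop the run.
        elif report == 'no-fail-fast':
            fail_fast = False   # Later failures do not stop the run.
        elif not run_one_test(report):
            ok = False
            if fail_fast:
                break           # Exit the main loop, reporting failure.
    return ok
```

Because the switches are processed in order, a "fail-fast" affects only the tests that follow it, and a later "no-fail-fast" can turn the behavior off again.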

3. The list of contents has been "off-loaded" to the << define contents >> section.

This foreshadows a single @test node that will get constituent unit tests from child nodes.

Headlines in the children will indicate whether the body text contains actual source code, points to a source file, or perhaps even points to a directory of files to test.
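That headline-based dispatch might look something like this. The '@file' and '@dir' prefixes here are hypothetical conventions for illustration only; the eventual @test node may use different ones:

```python
import glob
import os

def contents_from_headline(headline, body):
    """
    Return a list of (description, contents) pairs for one child node.
    The '@file' and '@dir' prefixes are hypothetical conventions.
    """
    if headline.startswith('@file '):
        # The headline points to a single source file.
        path = headline[len('@file '):].strip()
        with open(path) as f:
            return [(path, f.read())]
    if headline.startswith('@dir '):
        # The headline points to a directory of files to test.
        directory = headline[len('@dir '):].strip()
        result = []
        for path in sorted(glob.glob(os.path.join(directory, '*.py'))):
            with open(path) as f:
                result.append((path, f.read()))
        return result
    # Default: the body text itself is the source code to test.
    return [(headline, body)]
```

The parent @test node would then concatenate the pairs from all children and feed each contents string to the test runner in turn.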

Summary

This is, by far, the most flexible test runner I have ever used.

The driver shown above will be the basis for a single @test node that runs a suite of constituent tests defined by its children.

Edward