I am cycling back to my Scheme->JavaScript compiler and realized there is a problem with my testing method.
At the time of writing, what I do is compose all the passes, starting from a parse-and-rename pass that produces the Lsrc language, down to some kind of Ltarget language, at which point a procedure takes over and prints JavaScript code.
To test the whole thing, I create small snippets of Scheme code, compile them, execute the result with Node.js, and redirect the output to a file; I then verify the file's contents and commit it as the expected result.
That is to say, I test the output of the whole compiler on controlled input. This is usually what I do when testing other projects: I only test the public interface (and, in rare cases, difficult implementation details).
The thing is, even though my compiler is simple so far, I sometimes struggle to find the source of bugs because they sit in the middle of the nanopass pipeline. I think I have hit the "difficult implementation detail that requires testing" case.
That is why I was thinking about something like the following:
(define minipass0 (make-minipass parser0 pass0 unparse0 eval0 write0))
(define pipeline (minipass-compose minipass0 minipass1 ....))
Mind that `eval0` should be a meta-evaluator; in the ideal case it would be plain `eval`.
Imagine there are several minipasses.
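To make the idea concrete, here is a rough sketch (in Python, purely for illustration; all names are hypothetical stand-ins for the Scheme procedures above) of what `make-minipass` and `minipass-compose` could amount to: each minipass bundles a parser, a transform, an unparser, a meta-evaluator, and a writer, and composition just threads the IR through the transforms while keeping every intermediate stage testable on its own.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Minipass:
    parse: Callable[[str], Any]      # text -> IR in this pass's input language
    transform: Callable[[Any], Any]  # IR -> IR in the output language
    unparse: Callable[[Any], str]    # IR -> readable text, for debugging
    evaluate: Callable[[Any], Any]   # meta-evaluator for the output language
    write: Callable[[Any], str]      # final serializer (e.g. JavaScript text)

def make_minipass(parse, transform, unparse, evaluate, write):
    return Minipass(parse, transform, unparse, evaluate, write)

def minipass_compose(*passes):
    """Compose transforms in order. Because each pass also carries its own
    evaluator/unparser, a bug can be localized by evaluating the IR right
    after any single pass instead of only at the end of the pipeline."""
    def pipeline(source):
        ir = passes[0].parse(source)
        for p in passes:
            ir = p.transform(ir)
        return ir
    return pipeline

# Toy usage with two trivial "passes" over integers:
inc = make_minipass(int, lambda x: x + 1, str, lambda x: x, str)
dbl = make_minipass(int, lambda x: x * 2, str, lambda x: x, str)
pipeline = minipass_compose(inc, dbl)
```

With this shape, testing a single pass in isolation is just `p.evaluate(p.transform(p.parse(source)))` compared against the source language's own evaluation of `source`.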
Then the tests will be specified as a file system tree as follows:
./tests/entry-pass/output-pass/program.scm
./tests/entry-pass/output-pass/expected.txt
where entry-pass and output-pass name some minipassN, and expected.txt contains the verified result.
Does it make sense?
I guess what I want to ask is how do you test your compiler?