Testing style will tend to be shaped by both personal preference and the needs of the project.
Personally, I do a ton of data-driven tests, as is required by my
data-intensive projects (custom databases and analytics on them).
I often use a fork of goconvey for my testing. It promotes a BDD
style of documenting your tests, which is
very helpful when revisiting them months or years later.
You state the intent of the test in a string
that is passed into the test framework. The
typical "Given... when... then..." style is useful.
https://github.com/smartystreets/goconvey

For tests of a database, I tend to prefer to write both
my test output and my expected output to a file.
This keeps the expected output independent of
the code. Since a small code change can alter
thousands of lines of database output at once,
keeping the output in files makes it easy to
update the expected correct output when the
processing pipeline changes.
If the expected output were hardcoded in the test code,
updating tests whenever the correct output
changes would be excruciating. By using files
to store the thousands of lines of output,
updating a test when the expected output changes is
easy: just copy the new
output over the expected file, and done. Standard command
line tools like diff, head, and tail
make it easy to compare observed and expected
output of the test. I wrote a simple
test function to compare the two files
on disk. I'll copy it below. Expected output files
are version controlled, just like the test code.
This also addresses a major pain point of doing the
extensive testing needed to develop
working software. If you hardcode the expected
values into your tests, they become a lot of
work to update when the expected values change.
Have you ever broken 500 tests with one code change?
I certainly have. With the expected output on disk, it is all
updated with a short bash script.
This is admittedly an unusual approach, and
developers unfamiliar with
it may have a knee-jerk reaction and call it
outrageous. Yet it is extremely effective and efficient
for my projects.
Hopefully this example gives you a sense of why the
best practice is very dependent on the
project at hand.
It is worth being familiar with table-driven tests as well.
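A minimal sketch of that style, with a hypothetical Add function standing in for real code:

import "testing"

func Add(a, b int) int { return a + b } // placeholder for the code under test

func TestAdd(t *testing.T) {
    cases := []struct {
        name string
        a, b int
        want int
    }{
        {"zeros", 0, 0, 0},
        {"positives", 2, 3, 5},
        {"mixed signs", -2, 3, 1},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            if got := Add(tc.a, tc.b); got != tc.want {
                t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
            }
        })
    }
}

And here is the file-comparison helper I mentioned above: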
import (
    "bytes"
    "fmt"
    "os/exec"
)

// CompareFiles runs diff -b on the two given files and returns the length of
// the diff output along with the diff itself. A length of 0 means the files
// match; -1 means diff could not be run at all.
func CompareFiles(expected string, observed string) (int, []byte) {
    cmd := exec.Command("/usr/bin/diff", "-b", observed, expected)
    var out bytes.Buffer
    cmd.Stdout = &out
    err := cmd.Run()
    if err != nil {
        // diff exits 1 when the files differ; that is expected, not a failure.
        // Anything else (exit 2, missing binary, etc.) is a real error.
        if exitErr, ok := err.(*exec.ExitError); !ok || exitErr.ExitCode() != 1 {
            fmt.Printf("CompareFiles(): error during '\ndiff %s %s\n': %s", observed, expected, err)
            return -1, nil // unknown, but not 0
        }
    }
    return len(out.Bytes()), out.Bytes()
}
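And roughly how a test might use it. The file paths and buildObservedOutput are placeholders for whatever your pipeline actually writes out:

import (
    "os"
    "testing"
)

// buildObservedOutput is a placeholder for running the real pipeline.
func buildObservedOutput() []byte { return []byte("query results...\n") }

func TestQueryOutput(t *testing.T) {
    observed := "testdata/query1.observed"
    expected := "testdata/query1.expected" // golden file, version controlled

    // Write the pipeline's current output to the observed file.
    if err := os.WriteFile(observed, buildObservedOutput(), 0o644); err != nil {
        t.Fatal(err)
    }

    n, diff := CompareFiles(expected, observed)
    if n != 0 {
        t.Fatalf("output differs from %s:\n%s", expected, diff)
    }
}

When a change in output is intentional, the update is just copying the observed file over the expected one and reviewing the diff in version control.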