In the previous article, I introduced the basics of testing in Go, covering the standard library's testing capabilities, how to run tests and interpret results, and how to generate and view code coverage reports.
While those techniques are a great starting point, real-world code often demands more sophisticated testing strategies. You might face challenges like slow execution, managing dependencies, and making test results easily understandable.
To demonstrate the various techniques I'll be introducing in this article, I've created a GitHub repository which you can clone and work with on your local machine. We'll be testing a simple function that takes a JSON string and pretty-prints it to make it more human-readable.
When writing certain tests, you may need additional data to support the test cases and to enable consistent and repeatable testing. These are called test fixtures, and it's standard practice to place them within a testdata directory alongside your test files.
For instance, consider a simple package designed to format JSON data. Testing this package will involve using fixtures to ensure the formatter consistently produces the correct output. These fixtures might include various files containing JSON strings formatted differently.
The outcome is then evaluated based on the hasErr field. If an error is expected but PrettyPrintJSON does not return one, the test fails because it indicates a failure in the function's error-handling logic. Conversely, if an error occurs when none is expected, the test also fails.
Testing often involves asserting that the output from a function matches an expected result. This becomes challenging with complex outputs, such as long HTML strings, intricate JSON responses, or even binary data. To address this, we'll use golden files.
In the previous section, we used test fixtures to provide raw JSON data for formatting. Now, we'll enhance our testing approach by using a golden file to ensure that the formatted output from the PrettyPrintJSON function remains consistent over time.
The verifyMatch() function uses the goldie package to assert against the formatted JSON output produced by the PrettyPrintJSON() function, but it will fail initially because there's no golden file present at the moment:
Just like production code, test code should be maintainable and readable. A hallmark of well-crafted code is its modular structure, achieved by breaking down complex tasks into smaller, manageable functions. This principle holds true in test environments as well, where these smaller, purpose-specific functions are known as test helpers.
Test helpers not only streamline code by abstracting repetitive tasks but also enhance reusability. For instance, if several tests require the same object configuration or database connection setup, it's inefficient and error-prone to duplicate this setup code across multiple tests.
To illustrate the benefit of test helpers, let's update the verifyMatch() function introduced earlier. To designate a function as a test helper in Go, use t.Helper(). This call is best placed at the beginning of the function to ensure that any errors are reported in the context of the test that invoked the helper, rather than within the helper function itself.
Debugging can become more challenging without marking the function with t.Helper(). When a test fails, Go's testing framework will report the error location within the helper function itself, not at the point where the helper was called. This can obscure which test case failed, especially when multiple test functions use the same helper.
Testing often involves initializing resources or configuring dependencies before executing the tests. This setup could range from creating databases and tables to seeding data, especially when testing database interactions, such as with a PostgreSQL database.
Implementing setup and teardown routines is essential to streamline this process and avoid repetition across tests. For example, if you want to test your PostgreSQL database implementation, several preparatory steps are necessary, such as:
While each of the steps above is straightforward on its own, repeating all of them in every test quickly becomes tedious and error-prone. This is where implementing setup and teardown logic makes sense.
Before we can write the corresponding tests, let's create the migration files that will contain the logic to set up the database tables, along with the sample data we want to load into the database.
We now have both our migrations and sample data ready. The next step is to implement the setup function which will be called for each test function. We have two methods in the postgres/user.go file, so ideally this means we will write two tests. Having a setup function means we can easily reuse the setup logic for both tests.
The above code defines a setup for testing with a PostgreSQL database in Go, using the testcontainers-go library to create a real database environment in Docker containers. We have the following two functions:
setupDatabase(): Acts as the main setup function that initializes a new PostgreSQL container, sets up the database, loads sample data, and returns a closure for tearing down the environment. This closure should be invoked at the completion of each test to properly clean up and shut down the database container.
prepareTestDatabase(): Serves as a helper function to keep the setupDatabase() function concise. It is responsible for seeding the database with sample data using the testfixtures and golang-migrate packages.
Go tests are executed serially by default, meaning that each test runs only after the previous one has completed. This approach is manageable with a few tests, but as your suite grows, the total execution time can become significant.
The end goal is to have many tests, run them, and be confident that they all pass, but not at the expense of developers waiting around for them to finish. To accelerate the testing process, Go can execute tests in parallel.
While Go's test runner produces output that can be easily read and understood, there are ways to make it much more readable, such as using colors to distinguish failed and passed tests, or getting a detailed summary of all executed tests.
Testing in Go is generally a straightforward process: invoke a function or system, provide inputs, and verify the outputs. However, there are two primary approaches to this process: Whitebox testing and Blackbox testing.
Throughout this tutorial, we've primarily engaged in Whitebox testing. This approach involves accessing and inspecting the internal implementations of the functions under test by placing the test file in the same package as the code under test.
Its main disadvantage is that such tests can be more brittle since they are coupled to the program's internal structure. For example, if you change the algorithm used to compute some result, the test can break even if the final output is exactly the same.
Blackbox testing involves testing a software system without any knowledge of the application's internal workings. The test does not assert against the underlying logic of the function but merely checks whether the software behaves as expected from an external viewpoint.