One pattern which is super simple but really useful to implement is the expression_is_true test, which lets you define your own logic without having to write the SQL for a test. I used this a lot in financial models when I had to validate that subtotal columns were equal to a total, that financial values were positive, etc., so my schema.yml would look like:
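(A sketch assuming the dbt_utils package, which provides expression_is_true; the model and column names are illustrative.)

```yaml
# schema.yml
version: 2

models:
  - name: fct_financials
    tests:
      # subtotal columns must add up to the total
      - dbt_utils.expression_is_true:
          expression: "subtotal_revenue + subtotal_fees = total"
      # financial values must be positive
      - dbt_utils.expression_is_true:
          expression: "total >= 0"
```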
Our team needed to distinguish between warning and error thresholds for a number of tests, including dbt built-ins like unique and not_null, based on a configurable threshold. For the built-ins, we overrode the core implementation, adding support for thresholds and severity. Here is an example for our overridden unique:
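(A sketch of the shape this takes, not the exact code; the threshold variable name is illustrative. Defining a generic test named unique in the root project shadows dbt's built-in, and warn_if / error_if are standard test configs.)

```sql
-- macros/overrides/unique.sql
{% test unique(model, column_name) %}

    {{ config(
        severity = 'error',
        warn_if  = '>0',
        error_if = '>' ~ var('unique_error_threshold', 0)
    ) }}

    -- same core logic as dbt's built-in unique test
    select
        {{ column_name }} as unique_field,
        count(*) as n_records
    from {{ model }}
    where {{ column_name }} is not null
    group by {{ column_name }}
    having count(*) > 1

{% endtest %}
```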
I've been looking into custom test harnesses like libtest-mimic, which implement the protocol needed for libtest and nextest. But all the uses I can find either use harness = false to make integration tests (tests/*.rs) use their own main() that hands off to libtest-mimic, or they just use main() in examples.
I want examples that can run directly like cargo run --example fun_stuff -- --fun --args, but are also unit-testable via cargo test --example fun_stuff (or as a batch with cargo test --examples). With standard libtest, one can drop this at the end of examples/fun_stuff.rs to ensure it gets run that way:
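(Reconstructed as a sketch; the test body is illustrative.)

```rust
// At the end of examples/fun_stuff.rs: compiled only under `cargo test`,
// which builds a separate test executable for the example.
#[cfg(test)]
mod tests {
    #[test]
    fn example_logic_works() {
        // Illustrative; a real test would call into the example's own
        // functions.
        assert_eq!(2 + 2, 4);
    }
}
```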
(Of course you can also have ordinary unit tests.) This causes cargo to build a special testing executable populated with a generated testing main() that gets called instead of the ordinary main() in the example. I'd like to do something similar using a custom test harness, but I'm having trouble tracking down where this testing main() is defined and how to do the equivalent with a custom harness.
I don't know if there is a way to employ the magic implicit main() replacement mechanism that the built-in harness uses (I doubt it), but you can mimic its effects while using harness = false by changing what main() does under cfg(test):
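A sketch with libtest-mimic (the version, target settings, and names are illustrative):

```toml
# Cargo.toml
[dev-dependencies]
libtest-mimic = "0.7"   # version illustrative

[[example]]
name = "fun_stuff"
test = true
harness = false
```

```rust
// examples/fun_stuff.rs
fn main() {
    // Under `cargo test`, cfg(test) is set even with `harness = false`,
    // so hand control to the custom harness instead of running the
    // example normally.
    #[cfg(test)]
    {
        let args = libtest_mimic::Arguments::from_args();
        let tests = vec![libtest_mimic::Trial::test("it_works", || Ok(()))];
        libtest_mimic::run(&args, tests).exit();
    }

    // Under `cargo run --example fun_stuff`, cfg(test) is not set,
    // so the example behaves as usual.
    #[cfg(not(test))]
    {
        ordinary_main();
    }
}

#[cfg(not(test))]
fn ordinary_main() {
    println!("fun stuff happening");
}
```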
I'm not sure what exactly you are intending here. In the normal harness, there is no such thing; the test binary can't do anything but run its tests. Can you explain under what conditions the ordinary_mode would be needed? I suspect it is not.
In my tests I want to run something like mpiexec -n 3 target/debug/examples/this and analyze the output. That mpiexec -n 3 invocation runs the example as a parallel 3-process job that communicates between the processes. I can't invoke the "normal" target/debug/examples/this because it doesn't get built by cargo test.
For your particular use case of an example that needs to start itself to test itself, adding a flag to your custom test harness which starts the ordinary main does sound like the best option available.
I think it's pretty common to test libraries using command-line examples that can be a starting place for a user and also provide a way to explore library functionality interactively. But in normal use, nothing automated tests that those examples actually run, and I've not-infrequently found projects with stale examples despite CI checking cargo test.
That's a good point. You could propose and implement a feature to support this. (The way I'd do it is not to put tests in the examples themselves, because the example code should be free of distractions, but to extend the CARGO_BIN_EXE_* mechanism to optionally cover examples as well, so that separate tests in tests/ could run the examples.)
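For reference, the existing mechanism looks like this for [[bin]] targets (a sketch; the binary name and flag are illustrative, and nothing equivalent exists for examples today):

```rust
// tests/run_binary.rs
use std::process::Command;

#[test]
fn binary_runs() {
    // Cargo sets CARGO_BIN_EXE_<name> for each [[bin]] target when
    // building integration tests; the proposal would add an equivalent
    // for examples.
    let exe = env!("CARGO_BIN_EXE_fun_stuff");
    let status = Command::new(exe)
        .arg("--help")
        .status()
        .expect("failed to spawn binary");
    assert!(status.success());
}
```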
I would like to have a master test suite for the entire project, while maintaining per-module suites of unit tests and fixtures that can be run independently. I'll also be using a mock server to test various networking edge cases.
I also wrote a Perl script which will auto-generate the makefile and project skeleton from a list of class names, including both the "all-in-one" test suite and a stand alone test suite for each class. It's called makeSimple and can be downloaded from Sourceforge.net.
What I found to be the basic problem is that if you want to split your tests into multiple files, you have to link against the pre-compiled test runtime and not use the "headers only" version of Boost.Test. You have to add #define BOOST_TEST_DYN_LINK to each file and, when including the Boost headers, use <boost/test/unit_test.hpp> instead of <boost/test/included/unit_test.hpp>.
One thing that you might be asking for is the ability to have more than one .cpp file in your test program. That's as simple as only defining BOOST_TEST_MODULE in one of those .cpp files. We have a "driver.cpp" file in all our test programs that just defines that and includes the unit test header. All the rest of the .cpp files (scoped by module or concept) only include the unit test header; they do not define that macro.
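A minimal sketch of that layout (file and suite names are illustrative):

```cpp
// driver.cpp -- the only translation unit that defines BOOST_TEST_MODULE
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE MyProjectTests
#include <boost/test/unit_test.hpp>
```

```cpp
// widget_tests.cpp -- every other test file just includes the header
#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(widget_suite)

BOOST_AUTO_TEST_CASE(widget_addition)
{
    BOOST_CHECK_EQUAL(2 + 2, 4); // illustrative assertion
}

BOOST_AUTO_TEST_SUITE_END()
```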
If you're looking for a way to run a bunch of test programs in one single run and get a report then you could look at the automake method of doing tests or, better yet, the CMake method (CTest). Pretty sure you can use CTest from your own makefile if you insist.
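For reference, CTest registration might look something like this (target and file names are illustrative):

```cmake
# CMakeLists.txt -- build one Boost.Test program and register it with
# CTest; repeat add_executable/add_test per test program.
cmake_minimum_required(VERSION 3.16)
project(my_tests CXX)

find_package(Boost REQUIRED COMPONENTS unit_test_framework)

enable_testing()

add_executable(widget_tests driver.cpp widget_tests.cpp)
target_link_libraries(widget_tests PRIVATE Boost::unit_test_framework)

add_test(NAME widget_tests COMMAND widget_tests)
```

Running ctest --output-on-failure from the build directory then runs every registered test program and prints a summary report.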
Testing can be a really dry topic. It's so dry that every time I think about testing, all I want to do is go for a swim at the beach. Today is a really nice beach weather day too, but I can't work at the beach, so instead I'm going to make a beach plan using Azure Test Plans.
Azure Test Plans is exactly what it sounds like. It's a tool set that allows you to create test plans for manual and exploratory testing. Manual testing is really just human testing. But humans are prone to errors, so manual testing usually follows a strict plan. Exploratory testing, on the other hand, is creative testing. And like most things creative, it doesn't follow a strict plan. Azure Test Plans is actually a part of Azure DevOps. And what is Azure DevOps, you ask? Good question, let me get back to you after my beach trip...
There are a bunch of activities I want to do at the beach. Think of it as my requirements for this beach. At this beach, I want to be able to swim, surf, tan, build sand castles, and play beach volleyball.
For each activity, I'm going to create a New Test Suite in my beach plan. Each suite will represent one activity, because I want to separate the activities so I can focus on testing each one thoroughly, and because I take my beach activities very seriously, this will often result in more than one test case.
These are the steps that I think are necessary to validate this particular test case. I attached an image of someone swimming to the 4th step to demonstrate what I mean by "Start swimming". And because swimming is my favorite activity, I set the Priority status to 1 to indicate that this is a very important test case: if I can't swim at this beach, then I do not want to go to this beach. Then I hit that blue "Save & Close" button and my very first test case is created.
Next, I'm going to create some test cases for my Beach Volleyball scenario, but this time I will be using the grid method instead. The grid method is a quick and easy way to add test cases to a test suite, especially if you have multiple test cases to add.
The country configuration variable will contain a list of the countries where the beaches I'm interested in are located. I can add more countries, but I probably won't use them because I'm only interested in testing those five beaches.
Now that my two configuration variables are set up, I can start adding my test configurations, or beach configurations. For Shipwreck Beach, I need to create a new test configuration, name it 'Shipwreck Beach', and add the country and sand configuration variables to it with the values set to 'Greece' and 'White' respectively. I repeat this process for the other four beaches until I have a total of five test configurations (five beaches on which I can execute my beach plan).
I can capture a recording of all the actions I took and include it as an attachment. I can use the camera to capture something specific that happened and include that as an attachment. I can also add comments to each step as I run through the test case. And at each step, I have to indicate whether the action led to the expected result.
Let's say at step 4, I start swimming and find that I overestimated my swimming abilities and cannot swim at all. Instead of swimming, I start to drown and call for help. Shipwreck Beach was not designed with people who can't swim in mind, so there are no lifeguards around. Normally, someone on the beach who can swim might jump into the water to rescue me or call Emergency services, but in the Testing world we operate in a testing environment, so we have enough time to take corrective measures without constantly bothering Emergency services.
Here, I can create a bug during the test run and assign it to the beach developer to tell them that there is a flaw in the beach so they can make the appropriate fix, e.g. hire some lifeguards! The bug will contain all the test run details, so when the beach developer inspects this bug, they can see when and where the test case failed and figure out what the right fix is. The attachments will also be accessible inside this bug. I can also set the severity of this bug to 1, which means fixing it is crucial.
Like I mentioned before, exploratory testing gives the tester the freedom to be creative in their testing. Instead of giving them a test plan to follow, we give them a set of tools and the product to test. In my case, let's say I couldn't be bothered writing up a formal beach test plan but I still want to go to the beach and find out how well the beach can accommodate my swimming, surfing, tanning, sand castle building, and beach volleyball needs. Or, let's say I didn't know how to test a beach at all; I could use exploratory testing to figure out what the relevant test cases would be for testing a beach.