The majority (if not all) of valid uses for setup and teardown methods can be written as factory methods instead, which gives you DRY without running into the issues that seem to plague the setup/teardown paradigm.
If you're implementing a teardown, typically that means you're not writing a unit test but an integration test. A lot of people use this as a reason not to have a teardown, but IMO there should be both integration and unit tests. I would personally separate them into separate assemblies, but I think a good testing framework should be able to support both types of testing. Not all good testing is going to be unit testing.
However, with setup there are a number of reasons why you might need to do things before a test is actually run, for example constructing object state to prep for the test (such as setting up a Dependency Injection framework). That is a valid reason for a setup, but it could just as easily be done with a factory.
The biggest problem I've had with the setup/teardown paradigm is that my tests don't always follow the same pattern. This pushed me toward using factory methods instead, which let me have DRY while at the same time being readable and not at all confusing to other developers. Going the factory route, I've been able to have my cake and eat it too.
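To make the factory approach concrete, here's a minimal `unittest` sketch; the `Account` class and method names are made up for illustration, not from any real codebase. Each test calls the factory with exactly the arguments it needs, instead of relying on a one-size-fits-all `setUp`:

```python
import unittest


class Account:
    """Toy class under test (a stand-in for real production code)."""

    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


class AccountTests(unittest.TestCase):
    # Factory method instead of setUp: each test asks for exactly the
    # state it needs, so tests with different fixtures stay readable
    # and no test pays for setup it doesn't use.
    def make_account(self, balance=0):
        return Account(owner="alice", balance=balance)

    def test_deposit_adds_to_balance(self):
        account = self.make_account(balance=10)
        account.deposit(5)
        self.assertEqual(account.balance, 15)

    def test_new_account_starts_empty(self):
        account = self.make_account()
        self.assertEqual(account.balance, 0)


if __name__ == "__main__":
    unittest.main()
```

The factory keeps the shared construction logic in one place (DRY), while each test still reads top-to-bottom without a hidden fixture.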
They've really helped with our test maintainability. Our "unit" tests are actually full end-to-end integration tests that write to the DB and check the results. Not my fault, they were like that when I got here, and I'm working to change things.
Anyway, if one test failed, it went on to the next one, trying to enter the same user from the first test in the DB, violating a uniqueness constraint, and the failures just cascaded from there. Moving the user creation/deletion into the [Fixture][SetUpTearDown] methods allowed us to see the one test that failed without everything going haywire, and made my life a lot easier and less stabby.
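A sketch of the fix described above, using a Python `unittest` analogue of NUnit's `[SetUp]`/`[TearDown]`; the in-memory `FakeUserStore` is a hypothetical stand-in for the real database, just to show the isolation pattern:

```python
import unittest


class FakeUserStore:
    """In-memory stand-in for the real DB (hypothetical for this sketch)."""

    def __init__(self):
        self._users = set()

    def add(self, name):
        # Mimics the DB uniqueness constraint from the story above.
        if name in self._users:
            raise ValueError(f"uniqueness constraint violated: {name}")
        self._users.add(name)

    def remove(self, name):
        self._users.discard(name)


db = FakeUserStore()


class UserTests(unittest.TestCase):
    # setUp/tearDown run around *every* test, so a failing test can no
    # longer leave its user behind to break the next test's insert, and
    # a single failure stays a single failure instead of a cascade.
    def setUp(self):
        db.add("test-user")

    def tearDown(self):
        db.remove("test-user")

    def test_user_exists(self):
        self.assertIn("test-user", db._users)
```

The key point is that the cleanup runs whether the test passed or failed, which is exactly what stops one failure from cascading into the rest of the suite.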
I think the DRY principle applies just as much to tests as it does to code, but its application is different. In code you go to much greater lengths to literally not do the same thing in two different parts of the code. In tests, the need to do a lot of the same setup is certainly a smell, but the solution is not necessarily to factor the duplication out into a setup method. It may be to make the state easier to set up in the class itself, or to isolate the code under test so it depends on less of that state to be meaningful.
Given the general goal of only testing one thing per test, it really isn't possible to avoid doing a lot of the same thing over and over again in certain cases (such as creating an object of a certain type). If you find you have a lot of that, it may be worth rethinking the test approach, such as introducing parametrized tests and the like.
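One way to reduce that repetition, as suggested above, is parametrized tests. A minimal sketch using the standard library's `subTest` (pytest's `@pytest.mark.parametrize` is the other common route); `is_even` is a toy function invented for the example:

```python
import unittest


def is_even(n):
    """Toy function under test."""
    return n % 2 == 0


class ParametrizedExample(unittest.TestCase):
    def test_is_even(self):
        # subTest turns one logical test into many parameter sets,
        # instead of repeating near-identical test methods. Each
        # failing case is reported individually.
        cases = [(0, True), (1, False), (2, True), (7, False)]
        for n, expected in cases:
            with self.subTest(n=n):
                self.assertEqual(is_even(n), expected)
```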
I think setup and teardown should be primarily for establishing the environment (such as injections to make the environment a test one rather than a production one), and should not contain steps that are part and parcel of the test.
I agree with everything Joseph has to say, especially the part about tearDown being a sign of writing integration tests (and 99% of the time is what I've used it for), but in addition to that I'd say that the use of setup is a good indicator of when tests should be logically grouped together and when they should be split into multiple test classes.
I have no problem with large setup methods when applying tests to legacy code, but the setup should be common to every test in the suite. When you find yourself having the setup method really doing multiple bits of setup, then it's time to split your tests into multiple cases.
I use setup quite frequently in Java and Python, frequently to set up collaborators (either real or test, depending). If the object under test has no constructors or just the collaborators as constructors I will create the object. For a simple value class I usually don't bother with them.
I use teardown very infrequently in Java. In Python it was used more often because I was more likely to change global state (in particular, monkey patching modules to get users of those modules under test). In that case I want a teardown that will guaranteed to be called if a test failed.
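The "guaranteed to be called even if the test failed" part is what `unittest`'s `addCleanup` gives you; restore code placed at the end of the test body would be skipped by a failing assertion. A small sketch that monkey-patches an attribute of a real module (`json.dumps`, chosen only as a harmless demo target):

```python
import json
import unittest


class MonkeyPatchTests(unittest.TestCase):
    def test_patched_dumps(self):
        original = json.dumps
        # addCleanup runs even when the test body raises, so the
        # global state is always restored. Cleanup registered *before*
        # the patch means nothing can slip between patch and restore.
        self.addCleanup(setattr, json, "dumps", original)

        json.dumps = lambda obj: "patched"
        self.assertEqual(json.dumps({}), "patched")
```

After the test runs (pass or fail), `json.dumps` is back to the real implementation, so no other test can be poisoned by the patch.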
The issue to me is that if you have a test setup and teardown method, it implies that the same test object is being reused for each test. This is a potential error vector, as if you forget to clean up some element of state between tests, your test results can become order-dependent. What we really want is tests that do not share any state.
xUnit.Net gets rid of setup/teardown, because it creates a new object for each test that is run. In essence, the constructor becomes the setup method, and the finalizer becomes the teardown method. There's no (object-level) state held between tests, eliminating this potential error vector.
Most tests that I write have some amount of setup, even if it's just creating the mocks I need and wiring the object being tested up to the mocks. What they don't do is share any state between tests. Teardown is just making sure that I don't share that state.
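That kind of setup looks something like this in Python; the `Notifier` class and its `mailer` collaborator are invented for the sketch. Because `unittest` (like xUnit.net) builds a fresh test-case instance per test method, the mocks created in `setUp` are brand new every time and nothing recorded in one test can leak into the next:

```python
import unittest
from unittest import mock


class Notifier:
    """Toy object under test: forwards messages to a mailer collaborator."""

    def __init__(self, mailer):
        self.mailer = mailer

    def notify(self, msg):
        self.mailer.send(msg)


class NotifierTests(unittest.TestCase):
    def setUp(self):
        # A *fresh* mock per test: no state is shared between tests,
        # so no teardown is needed to clean the mock up.
        self.mailer = mock.Mock()
        self.notifier = Notifier(self.mailer)

    def test_notify_sends(self):
        self.notifier.notify("hi")
        self.mailer.send.assert_called_once_with("hi")
```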
Setup and teardown are convenience methods - they shouldn't attempt to do much more than initialize a class using its default constructor, etc. Common code that three tests need in a five-test class shouldn't appear there - each of the three tests should call that code directly. This also keeps tests from stepping on each other's toes, and from breaking a bunch of tests just because you changed a common initialization routine. The main problem is that setup and teardown are called before and after all tests - not just specific ones. Most tests should be simple, and the more complex ones will need initialization code, but it is easier to see the simplicity of the simple tests when you don't have to trace through a complex initialization in setup and a complex destruction in teardown while thinking about what the test is actually supposed to accomplish.
Personally, I've found setup and teardown aren't always evil, and that this line of reasoning is a bit dogmatic. But I have no problem calling them a code smell for unit tests. I feel their use should be justified, for a few reasons:
To the extent that my setup/teardown doesn't do this, I think their use is warranted. There will always be some duplication in tests. Neal Ford states this as "Tests can be wet but not soaking..." Also, I think their use is more justified when we're not talking about unit tests specifically, but integration tests more broadly.
Working on my own, this has never really been a problem. But I've found it very difficult to maintain test suites in a team setting, and it tends to be because we don't understand each other's code immediately, or don't want to have to step through it to understand it. From a test perspective, I've found allowing some duplication in tests eases this burden.
In our pipeline we have one step on a self-hosted runner (which runs in parallel with other steps) that has an issue with the Build teardown taking forever (no artifacts are uploaded and no caches are used), and I have some questions about it:
@Theodora Boudale hi - any chance you have good news for me on how one can avoid the test-results scan?
Not all pipelines run tests, so I don't get why Bitbucket needs to scan for test results on every pipeline. This is a real pain point for us.
The public issue tracker is where our product managers and development team track feature requests. I would suggest adding your vote to that feature request (by selecting the Vote for this issue link) as the number of votes helps us better understand the demand for new features. You can ask your team members to add their votes as well.
I would also suggest adding a comment with feedback and your use case, and adding yourself as a watcher (by selecting the Start watching this issue link) if you'd like to get notified via email on updates.
One thing you could try is creating a directory with depth 5 and moving the node_modules directory there. If you do that, you may need to change the configuration in your project so that it can find the node_modules directory in the new location.
If you would like us to investigate why the 'Build teardown' is taking such a long time for your builds, please create a support ticket by following the steps I mentioned in my previous reply; otherwise, we cannot access the repo and investigate. We need a support ticket for this rather than a community question.
We don't have any documentation specific to the "Build teardown". I believe that the times displayed in the Pipelines log get updated when there is output, so one possible reason for the time being displayed as
It is hard to say what is happening though without checking additional logs. I would advise creating a ticket with the support team and providing the URL of a build with this issue so we can further investigate. With a support ticket open, the engineer working on your case will be able to access the repo and additional logs in order to investigate.
Several times a day I get a connection drop, which has become very annoying. I finally decided to look at the router's logs and have been seeing a teardown and release every time I experience one of these connection drops. I've looked it up and have seen that others have experienced this same problem, but there are no public solutions posted, only "we will email you" replies.
Much like this post -link.com/en/home/forum/topic/218114. I don't want to post my logs because they give out my IP, but the logs show a [number]teardown and release followed by a [number]send dhcp release ip. How can I fix this?
2. Check and ensure the internet line or ISP modem is stable, bypassing the TP-Link router. Connect the PC directly to the ISP modem (or internet cable from wall), test and monitor the internet connectivity, ensure you have a stable connection. Check this guide for more suggestions: [Troubleshooting] Router Disconnects from Internet.