My own tests come from three sources. First, I almost always do test-driven programming, and those tests become the first part of the test suite. Second, whenever I fix a bug, I add a regression test. Third, I sometimes explicitly want something in the test suite that the first two sources haven't provided, and I add it in the last, pre-release phase. More often, though, I've cast the net wide enough in the development tests and bug-fix tests that by the time the last bug is fixed, I already have a quite adequate suite.
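To make the second source concrete, here's a minimal sketch of the bug-then-regression-test cycle. The function and the bug are hypothetical (a version parser that choked on a leading "v"); the point is that once the fix is in, the test pins the behavior down so the bug can't silently return.

```python
def parse_version(s):
    """Parse 'v1.2.3' or '1.2.3' into a tuple of ints (illustrative example)."""
    # The fix: strip a leading 'v' before splitting.
    return tuple(int(part) for part in s.lstrip("v").split("."))

def test_regression_leading_v_prefix():
    # Regression test for the (hypothetical) reported bug:
    # a 'v' prefix must be accepted.
    assert parse_version("v1.2.3") == (1, 2, 3)
    # The originally-working case must keep working too.
    assert parse_version("1.2.3") == (1, 2, 3)
```

The test names the bug it guards against, so anyone who later breaks it learns exactly which past failure they've reintroduced.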
I also often prioritize debugging and tracing tools, sometimes writing that logic before the code it will debug or trace, so that it's available while I develop. Of course, I'm always eager to get to the "real" programming. But if it's trace/debug machinery you *know* you'll need (and often you do), you gain time overall by delaying work on the core logic until it's in place. This relates to testing because it means that by the time I finish development, my trace/debug code has itself been thoroughly exercised.
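As one possible shape for this, here's a sketch of trace-first development: a small `@trace` decorator is written (and usable) before the core logic it instruments. The names `trace` and `gcd` are illustrative, not from the text above.

```python
import functools

def trace(fn):
    """Record each call's arguments and result; written before the core logic."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        # Keep a call log that development (and later, tests) can inspect.
        wrapper.calls.append((args, kwargs, result))
        return result
    wrapper.calls = []
    return wrapper

@trace
def gcd(a, b):
    # The "real" logic, developed with tracing already in place.
    while b:
        a, b = b, a % b
    return a
```

Because the decorator exists first, every step of developing `gcd` can lean on `gcd.calls` instead of ad-hoc print statements.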
I often keep tests of the trace and debug logic in the test suite. This isn't hard to do if you've developed and debugged the trace/debug logic alongside the main code: you build it on a test-driven basis too, and when it's done, you simply add those tests to the suite. I notice that other programmers rarely test their diagnostic and tracing code, and it can be a chore, but I find it's a life-saver.
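One way such a test can look, sketched with the standard `logging` module (the logger name and message format here are assumptions for illustration): route the trace output through a handler the test controls, then assert on what was emitted, treating the trace line as part of the tested contract rather than an afterthought.

```python
import io
import logging

log = logging.getLogger("app.trace")  # illustrative logger name
log.setLevel(logging.DEBUG)

def traced_step(x):
    """A core step that also emits a trace line."""
    log.debug("step: x=%r", x)
    return x * 2

def test_trace_output():
    # Capture the trace output in a buffer we control.
    buf = io.StringIO()
    handler = logging.StreamHandler(buf)
    log.addHandler(handler)
    try:
        result = traced_step(21)
    finally:
        log.removeHandler(handler)
    assert result == 42
    # The trace message itself is verified, not just the return value.
    assert "step: x=21" in buf.getvalue()
```

If someone later reworks the logging and breaks the trace format, this test catches it immediately instead of during a late-night debugging session.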