What good/bad practices and conventions do you use?


ldelabre

Jul 20, 2009, 2:58:04 PM
to NUnit-Discuss
Hi again,
I would like to know what good or bad practices and conventions (for
test fixture class and method names, etc.) you follow when using NUnit
(and/or when unit testing in general)?

...besides writing the tests first... of course ;)

Anything ?

Thanks,
Ludovic.

RJV

Jul 21, 2009, 2:27:13 AM
to nunit-...@googlegroups.com
  1. One assert per test (where possible).
  2. If there is any "if/else" inside a test, move the branches into individual test methods.
  3. If a method under test itself has if/else branches, the method should be refactored.
  4. The name of a test method should describe the kind of test. For instance, TestMakeReservation() is different from TestMakeNoReservation().
These are said to be good practices. I do not know the bad practices, except that they must be the contrary of the above.
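
Points 1 and 4 can be sketched roughly like this (BookingService and its members are hypothetical, invented only to illustrate the naming; the attributes and asserts are standard NUnit):

```csharp
using NUnit.Framework;

[TestFixture]
public class BookingServiceTests
{
    // One assert per test, with the scenario spelled out in the test name.
    [Test]
    public void TestMakeReservation()
    {
        var service = new BookingService();           // hypothetical class
        var result = service.MakeReservation("AB12"); // valid request
        Assert.IsTrue(result.Confirmed);              // single assert
    }

    [Test]
    public void TestMakeNoReservation()
    {
        var service = new BookingService();
        var result = service.MakeReservation(null);   // invalid request
        Assert.IsFalse(result.Confirmed);             // its own test, not an else branch
    }
}
```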
 
Regards,
 
Ravichandran Jv

 

Charlie Poole

Jul 21, 2009, 3:16:12 AM
to nunit-...@googlegroups.com
Hi Ludovic,

> I would like to know what good or bad practices and conventions
> (for test fixture class and method names, etc.) you follow when
> using NUnit (and/or when unit testing in general)?
>
> ...besides writing the tests first... of course ;)

In terms of the names, I think there would have to be several
different sets of conventions, depending on what a test fixture
represents. I usually use the classic fixture orientation,
with a separate TestFixture for each different required fixture.
Others - especially those oriented toward BDD - use a test class
per context. There may be other paradigms. Each such approach
would call for different naming conventions.

Charlie

Sébastien Lorion

Jul 22, 2009, 11:30:50 AM
to nunit-...@googlegroups.com
Does everyone agree with these? It seems to me that these conventions
will quickly lead to an explosion of methods, especially if you also
apply them to the code in your libraries. I personally prefer thinking
in terms of testing a logical unit of work or, said differently, an
atomic action. If that requires more than one assert, then so be it.

Sébastien

Charlie Poole

Jul 22, 2009, 1:29:38 PM
to nunit-...@googlegroups.com
Silence does not imply agreement. :-)

I generally teach my teams that each test should test one thing,
which can be expressed in the name. At the level of a unit test,
which I guess is what we are discussing, this means there should
be one "logical assert." Sometimes, due to a lack of expressiveness
in the testing API, you need to write multiple physical asserts
to achieve the desired result. Much of the development in the
NUnit framework API has grown out of an attempt to get a single
assert to do more work.
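
As a rough sketch of that distinction: one "logical assert" about a collection might otherwise take several physical asserts, and NUnit's constraint syntax (available since NUnit 2.5-era releases) lets you combine them:

```csharp
using NUnit.Framework;
using System.Collections.Generic;

[TestFixture]
public class LogicalAssertExample
{
    [Test]
    public void PricesAreThreePositiveValues()
    {
        var prices = new List<decimal> { 10m, 20m, 30m };

        // Several physical asserts for one logical statement...
        Assert.AreEqual(3, prices.Count);
        Assert.IsTrue(prices.TrueForAll(p => p > 0));

        // ...or a single logical assert using the constraint model:
        Assert.That(prices, Has.Count.EqualTo(3).And.All.GreaterThan(0));
    }
}
```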

All that said, I often tell beginners to write one assert per
test rather than confusing them with all the above. :-)

I do agree that a branch in a test is usually a bad thing. It
may lead to creating two tests or it may cause me to perform
some other refactoring.
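
For instance, a test that branches on the outcome is really two tests in one. A sketch of the split (AgeParser and its result type are hypothetical names):

```csharp
using NUnit.Framework;

[TestFixture]
public class AgeParserTests
{
    // Instead of one test containing
    //   if (result.IsValid) Assert...; else Assert...;
    // write one test per branch, each with a fixed input and one assert:

    [Test]
    public void ParseAge_ValidInput_ReturnsAge()
    {
        var result = AgeParser.Parse("42");        // hypothetical parser
        Assert.AreEqual(42, result.Age);
    }

    [Test]
    public void ParseAge_InvalidInput_ReportsError()
    {
        var result = AgeParser.Parse("forty-two");
        Assert.IsFalse(result.IsValid);
    }
}
```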

Charlie

bcook

Jul 22, 2009, 2:11:36 PM
to NUnit-Discuss
I put together this list of do/consider/avoid conventions for TDD:
http://www.bryancook.net/2008/06/test-naming-conventions-guidelines.html

RJV

Jul 23, 2009, 2:28:40 AM
to nunit-...@googlegroups.com
Thanks for the link. But I find the "avoid" and "prefer" type of guidelines a little difficult to remember.
 
I recently heard that NotImplementedException is a design smell! I tried to get a conclusive answer but could not. My contention is that, from a Test First or TDD perspective, it is proper. Another thing I heard was that SetUp and TearDown are evils.
 
Can someone shed light on what these two opinions mean, or are they just misinformed?
 
RJv

 

Adam Connelly

Jul 23, 2009, 7:39:02 AM
to nunit-...@googlegroups.com
I would not consider NotImplementedException to be a design smell, though of course it depends on how you are using it. I tend to use it while doing TDD, when I'm stubbing out methods, or put it in places so that I don't forget to implement something. I would consider it a problem if it made it into production code; I pretty much only use it as a temporary measure while I'm developing.

I don't think there's any problem with using Setup and Teardown methods. I would argue that without them it's very difficult to create common fixtures for your tests which can then lead you to duplication (which is a code smell). For example, I doubt that anyone would argue that the following:

[TestFixture]
public class MyTestClass
{
  [Test]
  public void Test1()
  {
    MyObject obj = new MyObject();
    Assert.That(...);
  }

  [Test]
  public void Test2()
  {
    MyObject obj = new MyObject();
    Assert.That(...);
  }
}

is better than:

[TestFixture]
public class MyTestClass
{
  private MyObject obj;

  [SetUp]
  public void Setup()
  {
    obj = new MyObject();
  }

  [Test]
  public void Test1()
  {
    Assert.That(...);
  }

  [Test]
  public void Test2()
  {
    Assert.That(...);
  }
}

Unless I'm missing the point of the question. Obviously it makes more and more sense as you add more tests that use the common fixture.

Cheers,
Adam


Sébastien Lorion

Jul 23, 2009, 10:19:58 AM
to nunit-...@googlegroups.com
I agree with you on the principle, but I just want to point out that, in
your example, if MyObject is not stateless or immutable, I would
prefer the first version.

Sébastien

Adam Connelly

Jul 24, 2009, 6:11:18 AM
to nunit-...@googlegroups.com
Is that just for the sake of clarity or do you have some other reason?

Cheers,
Adam


Sébastien Lorion

Jul 24, 2009, 11:46:38 AM
to nunit-...@googlegroups.com
To avoid side effects on the shared instances, and the unintentional
reliance on test execution order that they create.
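
A sketch of the hazard: with a run-once fixture setup and a mutable shared object, the tests below only pass in one order (names here are invented for illustration; [TestFixtureSetUp] is the NUnit 2.x run-once attribute):

```csharp
using NUnit.Framework;
using System.Collections.Generic;

[TestFixture]
public class SharedStateHazard
{
    private List<string> items;

    [TestFixtureSetUp]        // runs ONCE for the whole fixture, not per test
    public void FixtureSetUp()
    {
        items = new List<string>();
    }

    [Test]
    public void AddPutsItemInList()
    {
        items.Add("a");
        Assert.AreEqual(1, items.Count);
    }

    [Test]
    public void ListStartsEmpty()
    {
        // Passes only if it happens to run before AddPutsItemInList:
        Assert.AreEqual(0, items.Count);
    }
}
```

With a per-test [SetUp] instead, each test would get a fresh list and the order dependence disappears.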

Sébastien

Mark Levison

Jul 24, 2009, 2:36:07 PM
to nunit-...@googlegroups.com
Isn't setup rerun for every test? Isn't that the whole point of setup?

Mark Levison

Jul 24, 2009, 2:36:40 PM
to nunit-...@googlegroups.com
BTW I summarized this thread and added a few others I found on InfoQ: http://www.infoq.com/news/2009/07/Better-Unit-Tests

Cheers
Mark

Sébastien Lorion

Jul 24, 2009, 3:01:02 PM
to nunit-...@googlegroups.com
Ah right, my bad. I read the OP too quickly and thought we were talking
about fixture-level setup/teardown.

Sébastien

Adam Connelly

Jul 28, 2009, 4:17:19 AM
to nunit-...@googlegroups.com
That makes more sense - that's what was confusing me. I haven't actually written any tests that make use of fixture setup and teardown so far. Just out of interest, is it a feature that other people use quite often?

Adam


Adam Connelly

Jul 28, 2009, 4:25:26 AM
to nunit-...@googlegroups.com
Here's a question for anyone. The following rule has been mentioned:

"Methods under test, if they, too, have if else branches, the method should be refactored."

Now I understand refactoring a test with an if-else branch, but could someone give me an example of refactoring methods with if-else branches? It's just I'm not 100% sure what you mean by it.

Cheers,
Adam


RJV

Jul 28, 2009, 4:32:07 AM
to nunit-...@googlegroups.com
It is a very simple scenario. I thought tests with if/else branches were more complex to imagine.
 
public void Add(ReservationDetails rd)
{
    if (rd.Reservation)
    {
        // ...
    }
    else
    {
        // ...
    }
}
 
What it means is that, if branches exist in a method, the method is considered to be doing more than it is supposed to do, much like the single responsibility principle applied to classes.
 
In the above example, Reservation == false could mean many things: for example, the ticket contains details for refunds, or it contains no reservation but still consists of a seat, with the possibility that the passenger has a reservation in subsequent legs of the journey, and so on. switch...case and if...else branches are considered the best candidates for refactoring.
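
One common way to do that refactoring is "Replace Conditional with Polymorphism": each branch of the old if/else becomes its own class, testable on its own. A sketch, with all type names hypothetical:

```csharp
public class ReservationDetails
{
    public bool Reservation { get; set; }
    // ... other ticket details
}

// Each branch of the old if/else becomes its own handler.
public interface ITicketHandler
{
    void Add(ReservationDetails rd);
}

public class ReservedTicketHandler : ITicketHandler
{
    public void Add(ReservationDetails rd) { /* confirmed-seat logic */ }
}

public class UnreservedTicketHandler : ITicketHandler
{
    public void Add(ReservationDetails rd) { /* refund / open-seat logic */ }
}

// The caller chooses a handler once, instead of branching inside Add():
public class ReservationProcessor
{
    private readonly ITicketHandler handler;

    public ReservationProcessor(ITicketHandler handler)
    {
        this.handler = handler;
    }

    public void Add(ReservationDetails rd)
    {
        handler.Add(rd);   // no if/else; each handler gets its own tests
    }
}
```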
 
Just for example's sake.
 
RJv

Adam Connelly

Jul 28, 2009, 7:16:11 AM
to nunit-...@googlegroups.com
Ok, I think I understand what you mean. What was confusing me was that the implication seemed to be that if/else branches should always be refactored, whereas the point was actually that they are a code smell, the same as switches.

Would you agree?

Adam


RJV

Jul 28, 2009, 7:21:56 AM
to nunit-...@googlegroups.com
An if...else is a set of conditions, which most likely means it is helping to evaluate different scenarios (read: use cases, and therefore operations) and to implement logic accordingly. So the function ends up doing more than what it was designated to do.
 
Yes, it is a code smell, but I have found that refactoring "if...else" is a very good practice: it simplifies maintenance and increases readability. It may result in a proliferation of methods, but there is a good side to that, which is that test coverage increases!
 
RJv

 