Episode 19, Part 1 - Mistakes - a different approach.

Andreas Altendorfer

Jun 13, 2013, 5:43:37 PM
to clean-code...@googlegroups.com

Dear Uncle Bob,

First, I want to send you a big hug for all the stuff you've been teaching me since I first met you at RailsConf 2010 in Baltimore.

I guess all the mistakes have been mentioned already, and I fear there is no green band left for me anyway. So I took an hour off from my regular work to draft my version of a SortableName class. I wonder what you and your audience will think about this approach, which is highly inspired by Sandi Metz's talk at the recent RailsConf 2013 in Portland.

1) Of course I did follow the red/green/refactor cycle in very small steps, just as you told me. I guess this is the essence of Episode 19/1.

But then, I left the way and I’m willing to take your admonishment if I’m completely wrong.

2) I did it in Ruby (because my last Java experience was long ago and I had no time to refresh my memory this afternoon). This should make no difference, because it's about the way rather than the tool.

3) I started by setting up my environment in order to see green for

expect(true).to be_true
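
For completeness, the whole sanity-check spec is tiny. A rough sketch, assuming RSpec 2.x as it was in 2013 (the file name is just an example):

# spec/sortable_name_spec.rb -- example file name
describe 'the environment' do
  it 'sees green' do
    expect(true).to be_true   # be_truthy in later RSpec versions
  end
end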

4) I made the decision that I will definitely do it in a class. (I took the time for a cigarette, and my conclusion was that this is not too early, neither for the decision nor for some nicotine and coffee.)

4a) green for the initializer

4b) red for no parameter

4c) red/green for expecting an exception if the param is nil (I like to fail early, and I thought it was time for a kind of joke ;-)
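
A sketch of that spec (ArgumentError is my assumption; the published code may raise something else):

it 'raises an error when initialized with nil' do
  expect { SortableName.new(nil) }.to raise_error(ArgumentError)
end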

5) Here I began to leave your way by thinking: OK, the initializer will not do the work, nor can I expect it to do anything but remember the input string.

5a) I wrote a test to make sure a (private) message, responsible for formatting the output, is called before any instance of my class is output as a string.
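
Roughly like this (format and to_s are placeholder names of mine, not necessarily the names in the repository; RSpec message expectations work on private methods too):

it 'calls the (private) formatter before rendering as a string' do
  name = SortableName.new('First Last')
  expect(name).to receive(:format).and_return('Last, First')  # :format is a placeholder name
  name.to_s
end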

6) I took your advice that it's OK to do more than one physical assert in one test as long as it is all about the same logical assertion. So I named my test 'It should format all defined input variations to match the expected output'.

I started with an empty array of test pairs and made sure the test runs. It was even green, because the array was empty and the inner assertion was never called. The test reads as

for pair in pairs do
  expect(SortableName.new(pair.first).to_s).to eq(pair.last)
end


7) I filled in the first pair ['', '']. The test failed with a nice message telling me that it expected '' but got nil. Of course, my format function (tested only for being called in 5a) still returns nil.

8) I remembered that gorgeous talk by Sandi Metz, in which she mentions that one should not test private methods if they are exercised through the public interface. So I wrote no more tests for my class, since everything I will do from now on will be done in the private block.

I just return the @name instance variable of my class, and the test passed.
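
At this point the class is roughly the following (a sketch of my own, with placeholder method names, not necessarily identical to the code on GitHub):

class SortableName
  def initialize(name)
    raise ArgumentError, 'name must not be nil' if name.nil?
    @name = name
  end

  def to_s
    format
  end

  private

  # Still trivial: just echo the stored name.
  def format
    @name
  end
end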

9) I found nothing worth a refactor, and my red/green/refactor cycle for ['', ''] was done.

10) I added the pair [' ', ''] to my definitions.

10a) test failed

10b) added a .strip to the output of my formatting method. -> passes

10c) moved the strip to the initializer, so the format method does not have to care about it.

11) added ['Name', 'Name'] to my definitions. Passes, the same way [' Name ', 'Name'] does.

Notice that I’m using SimpleCov to make sure all LOC are covered whenever I run the tests (even the private methods of course).

12) ['First Last', 'Last, First'] failed.

12a) added the split and reverse until green.

12b) cleaned up the mess.
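
After this step the class looks roughly like this (again a sketch; the real code on GitHub handles more cases, such as honorifics and postnominals):

class SortableName
  def initialize(name)
    raise ArgumentError, 'name must not be nil' if name.nil?
    @name = name.strip
  end

  def to_s
    format
  end

  private

  # 'First Last' becomes 'Last, First'; a single word or an empty
  # string passes through unchanged. Later pairs will force refinements.
  def format
    @name.split(' ').reverse.join(', ')
  end
end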

And so on….

  • a) add the pair to be tested to the definition array
  • b) see it fail, with output telling you what's wrong
  • c) make it green.
  • d) clean up and make sure every LOC is covered (remove the lines that are not).
  • Continue at a)

You can do so with any input case one may come up with, without writing a new test. Just add the test pair and get back to your work. I love writing tests, though. Writing clean production code is even more fun.
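
To illustrate, the definition array simply keeps growing (PAIRS is a name I use here only for illustration; the real spec may organize this differently):

PAIRS = [
  ['', ''],
  [' ', ''],
  ['Name', 'Name'],
  [' Name ', 'Name'],
  ['First Last', 'Last, First']
  # each new requirement is just one more pair here
]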

At the end I implemented two more integration tests, just to see that it is possible to get the name both formatted for sorting and unformatted, as it was passed when initializing the instance.
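
Roughly along these lines (the reader method original is an assumption of mine; the real class may expose the raw name differently):

describe 'integration' do
  it 'formats the name for sorting' do
    expect(SortableName.new('First Last').to_s).to eq('Last, First')
  end

  it 'still provides the unformatted name' do
    expect(SortableName.new('First Last').original).to eq('First Last')  # original is a hypothetical reader
  end
end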

Finally I cleaned up everything until I was happy with the code (you can see it on Github).

IMHO I do not violate any of your rules, master, by following this strategy. But please, prove me wrong, or even an idiot, if I just didn't see the truth.

Have a nice evening, it's time to sleep here in Austria.

- Andreas


Uncle Bob

Jun 14, 2013, 5:24:27 AM
to clean-code...@googlegroups.com
Winner #19

Andreas,

You have pointed out a common dilemma in TDD. Given that there is a set of test data to run through a single function, should I run each case as an individual test function, or should I put all the cases into a single test function?

I will often do what you did, which is to create an array of input/expected pairs, and then run them through a loop.  This works very nicely when the _intent_ of the test, or the rule that the test is enforcing, is obvious from the data itself.  

Sometimes, however, the rule being tested is not obvious from the input/expected pair.  In those cases individual test methods are better.  

So, when I do the prime factors kata, I put all the assertions into a single test function.  If I write it in Ruby, I construct the array of pairs just as you did. 
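
In Ruby that typically ends up looking something like this (a sketch, not the exact code from the episode; prime_factors_of is a placeholder name):

it 'factors integers into primes' do
  [
    [1, []],
    [2, [2]],
    [3, [3]],
    [4, [2, 2]],
    [6, [2, 3]],
    [8, [2, 2, 2]],
    [9, [3, 3]]
  ].each do |n, expected|
    expect(prime_factors_of(n)).to eq(expected)
  end
end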

Are the rules for name inversion obvious from the input/expected pairs?  That's not at all clear.  Consider your tests.  One of them is: ['Mr. First Last jun.', 'Last, First jun.']

In this test you have helpfully identified the 'first' and 'last' name by using the appropriate words.  But you did not helpfully identify the honorific or the postnominal.  You used a _real_ honorific and postnominal instead.  And of course you had to use a real honorific and postnominal because the production code checks for them by value.  

POSTNOMINALS = %w(PhD. BS SR JR esq sen. jun. I II III JEG2).map!(&:upcase)
HONORIFICS   = /Mr?s?\./

What this means is that the test cannot be understood without understanding the production code.  For example, this test would fail: ['Mr. First Last jun', 'Last, First jun']

Notice the missing dot? By looking at the original test, it's not at all clear that the dot is important. In fact, if you don't know that 'jun.' is a postnominal, you don't know that one of the test rules is about postnominals. I realize I'm stretching the issue. The point I'm trying to make (badly) is that if the tests are self documenting, there's no reason they can't be put into an array as you have done. If they aren't self documenting (if you can only understand the test by looking at the production code) then individual test functions can provide the extra documentation.
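
For example, an individual test function can carry the rule in its name (a sketch; I'm assuming your class renders its sortable form via to_s):

it 'drops the honorific and keeps the postnominal after the first name' do
  expect(SortableName.new('Mr. First Last jun.').to_s).to eq('Last, First jun.')
end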

Andreas Altendorfer

Jun 15, 2013, 9:16:23 AM
to clean-code...@googlegroups.com

Thank you for your helpful response. 


Of course, understanding the spec without knowing the implementation is important. I never thought about this, because I'm used to working in very small teams or even on my own. I will do so from now on, and I understand that it will be helpful even in a single-developer setup. I remember a few situations where I got stuck and had to figure out why a spec doesn't work when it should. Maybe this lack of cleanliness was the reason.


I came up with these solutions:


Making the definition pair a definition triple, or adding a comment to each pair, would help make clear what the definition is meant to test.


For the problem where the POSTNOMINALS are defined as a constant within the implementation, I think the best way would be to add an extra spec: "all test pairs should work with appended postnominals". This spec would iterate the test pairs again, appending the postnominals from the constant to every test input. Doing so makes it clear that there is a constant, and that a name on the input can have a POSTNOMINAL as a suffix, without the need to know which values are defined inside the constant. It would also shrink the number of lines of test-pair definitions (which is a bit long, in my opinion, anyhow). A sketch of that idea follows.
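
Something along these lines (just a sketch; the linked commit may do it differently, and whether the output keeps the postnominal's case exactly as given is an implementation detail):

it 'accepts any defined postnominal as a suffix' do
  SortableName::POSTNOMINALS.each do |postnominal|
    expect(SortableName.new("First Last #{postnominal}").to_s).to eq("Last, First #{postnominal}"),
      "failed for postnominal #{postnominal.inspect}"
  end
end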

Changes: https://github.com/iboard/sortable_name_class/commit/345aa24dc17c7e5eb9144e799764713a6db25eb9

No need to reply - I don't want to stress this issue either ;-)

Looking forward to Part II.

best

 - Andreas

Daniel Karp

Jun 16, 2013, 9:32:48 AM
to clean-code...@googlegroups.com
One solution I've used in this situation: 

Put the data in an array and, where the rule being tested is not obvious, add a "comment", not in the form of an actual comment but in the form of an additional variable. I work in PHP with PHPUnit, which allows a dataProvider: a function that returns the array of data (which in PHP can have mixed types); PHPUnit then runs the test case in a loop, passing one element of the array to the test for each run. If you add a "message" field to the array returned by the dataProvider, PHPUnit will output that message, along with the other test data, when that element fails a test.

If that is available for your testing platform, it seems to me to provide a good mix of test-code reuse and self-documenting tests, to the point that you can see what rule was being tested when a test fails without having to look at either production OR testing code.

Andreas Altendorfer

Jun 16, 2013, 12:37:55 PM
to clean-code...@googlegroups.com
Daniel,

Displaying the comment with the message is definitely a good idea. 
In my case the failure message includes the line of the test with the expected and the actually returned value. Then I have to open the spec file to figure out what the tested 'scenario' was.

I added a third column to my definitions and found a way to display it when a test fails, roughly as sketched below. But now the test becomes a bit messy.
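
A sketch of what I mean (the actual spec in the repository differs in detail, and the descriptions here are only examples):

DEFINITIONS = [
  ['', '', 'an empty input stays empty'],
  ['First Last', 'Last, First', 'first and last name are swapped'],
  ['Mr. First Last jun.', 'Last, First jun.', 'honorific dropped, postnominal kept']
]

it 'formats all defined input variations to match the expected output' do
  DEFINITIONS.each do |input, expected, description|
    expect(SortableName.new(input).to_s).to eq(expected),
      "#{description}: #{input.inspect} should become #{expected.inspect}"
  end
end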

I hate it when I have to split up a logical line into multiple editor lines.
Currently I'm enjoying a lazy, relaxing, sunny Sunday at the pool, but I will find a way to clean up this mess ASAP. If there isn't a proper plugin for RSpec, maybe it's time to give some value back to the open-source community, and I will write one. Shouldn't be that hard.

best
 - Andreas