Definable Processes

Corey Haines

Apr 29, 2009, 12:18:42 PM4/29/09
to software_cr...@googlegroups.com
Ugh! Writing is painful. :)

On Wed, Apr 29, 2009 at 10:36 AM, mheusser <matt.h...@gmail.com> wrote:

On Apr 29, 5:53 am, Corey Haines <coreyhai...@gmail.com> wrote:
>'Software Craftsmanship is a reaction to Chaos in software
>development techniques.' Most developers cannot sit down and list out, in a
>sane fashion, their process for developing software without putting at least
>one 'and then a miracle occurs.'
>

Then again, most written process descriptions bear only the most vague
and superficial resemblance to what actually happens in the
development process. (Alistair Cockburn)

I'm less interested in whether a process can be described than in whether it can be relied on; we learn that by experience.

All processes can be described, but my experience has shown that we can't rely on a process unless it is reproducible. I am not talking about defining an end-all-be-all process for everyone; I am more interested in people being able to define their own process, one that they actually follow when the pressure mounts. Having a starting point for a reproducible process also allows you to evolve it as new learning and experience arise.

Alistair's quote is a true, albeit sad and sorry, description of the state of the software development industry. However, I don't think I explained myself well enough in my original comment, as his quote is not related to what I'm talking about. Here are some more ramblings of mine, guaranteed to diverge into other, possibly related, topics as well. :)

Most development is done in the following style:
- Receive written specification
- Perhaps have a brief conversation about it with someone
- Start writing code
- Write code until you have some way of running the application
- Manually go through the application to see if it works according to your understanding of the specification
- Get happy path working
- Write a few edge-case guard conditions in various spots of the application, often choosing the 'most convenient' place to put the code rather than where it actually makes sense, since it is fastest to just 'stick it somewhere'
- Run the application again, try a couple of invalid inputs, and make sure the application doesn't crash
- Mark feature as done and pass to QA
- Find that the specification was misunderstood or underspecified
- Add some code to approach the actual desired functionality, often choosing the 'most convenient' place to put the code rather than where it actually makes sense, since it is fastest to just 'stick it somewhere'
- Manually run through the application to see if the new stuff works, without re-running through anything else
- Mark feature as done and pass to QA
- Get a bug report (often a re-emergence of a previous bug that was supposedly squashed)
- Throw some code in to 'fix' the bug, often choosing the 'most convenient' place to put the code rather than where it actually makes sense, because that might involve a design change, which is frightening when you aren't completely sure whether you'll break something
- Put bug in retest
- Wash, rinse, repeat, bitch about QA

Now, here is the process that I generally see when the deadline looms:
- Plow through issue reports by throwing code in wherever you can to counter the bugs, rather than where it actually makes sense, dirtying the codebase dramatically
- Justify your decision to dirty your codebase by using such wonderfully useful terms as 'pragmatism' and 'technical debt'


Now, here is what I see and hear about in a lot of 'Agile' shops: take the first process, use it for a short bit, then immediately drop down to the second (deadline-looming) process, because we are now 'iterative' and, rather than applying the deadline pressure every six months or so, let's make the deadline every two weeks to a month. Huzzah! Developers write their worst code under deadline pressure, so let's make their deadlines more frequent.

In fact, my opinion is that this isn't even a process at all; it is better called a 'style.' And I have a name for this style: 'hacking away at code.'


My experience is that this style of development is fundamentally flawed. Everyone complains about the fact that, over time, codebases get increasingly difficult to work with. Making changes costs more; you end up with a lot of spaghetti code, duplication, huge classes and methods, and violations of all sorts of simple design guidelines. I've spent a lot of time and energy justifying these things to both myself and others. After all, people don't tend to sit down and write 100-line methods; methods grow to that size incrementally over time. Inevitable? No, I don't think it is. It happens because people are not willing to be methodical and careful in their development.

Here's the process that I follow when writing software:
- Talk about what is needed with someone who knows
- Begin writing my understanding of the desired functionality in a format that can be executed against the actual system, these days in Cucumber (see the sketch after this list)
- After having a fairly complete set of initial specifications, begin running them
- Watch specifications fail
- Begin writing executable specifications at a lower level (for me, using RSpec) to drive out a design that satisfies the initial executable specifications
- After every bit of code that I add, verify that I have not broken pre-existing, defined functionality (I can do this because pre-existing functionality can be verified quickly)
- Wash, Rinse, Repeat
- After the specifications are passing, begin exploratory testing of the system
- When a bug is found, write an executable specification at the lower level to expose the issue, and watch it fail
- Alter the code to get the bug-exposing specification to pass; if needed, spend time changing the design to provide an appropriate place for the bug fix
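
To give a flavor of what those initial executable specifications look like, here is a rough sketch of a Cucumber scenario (the banking domain here is invented purely for illustration):

  Feature: Account withdrawal
    In order to get at my money
    As an account holder
    I want to withdraw cash from my account

    Scenario: Withdrawal exceeding the balance
      Given my account has a balance of $100
      When I try to withdraw $150
      Then the withdrawal should be declined
      And my account balance should still be $100

Each Given/When/Then line gets bound to the real system by a Ruby step definition, along these lines (Account is a made-up class):

  Given /^my account has a balance of \$(\d+)$/ do |amount|
    # Set up the system state the scenario assumes
    @account = Account.new(amount.to_i)
  end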

Now, here is the process that I use when the deadline looms; it is different because my skills in using Cucumber are lacking (although I'm practicing to get them up to snuff):
- Talk about what is needed with someone who knows
- Write a few (though not nearly a complete set of) Cucumber scenarios to give me something to work towards
- Start writing executable specifications at a lower level (for me, using RSpec) to drive out a design that satisfies the required functionality
- Rely on the fact that, in my experience, with adequate use of lower-level specification-verification tools (such as RSpec), the system ends up with fewer issues and a more robust design
- Wash, Rinse, Repeat
- When a bug is found, write an executable specification at the lower level to expose the issue, and watch it fail
- Alter the code to get the bug-exposing specification to pass. If a design change is needed to apply the fix in the most appropriate place, weigh the time needed against the inappropriateness of the second-most-appropriate spot for the bug fix (see the sketch after this list).
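
For what it's worth, a bug-exposing specification at that lower level might look something like the following. The Account class and the bug in it are invented for illustration, and the production code is included inline so the sketch stands alone; the spec fails until the comparison is fixed:

  # The (made-up) production code, with the bug still in it
  class Account
    attr_reader :balance

    def initialize(balance)
      @balance = balance
    end

    def withdraw(amount)
      # Bug: >= should be >, so withdrawing the entire balance blows up
      raise "insufficient funds" if amount >= @balance
      @balance -= amount
    end
  end

  # The bug-exposing specification -- watch it fail, then alter the code
  describe Account, "withdrawing the entire balance" do
    it "leaves the balance at zero" do
      account = Account.new(100)
      account.withdraw(100)
      account.balance.should == 0
    end
  end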

Now, my view is that there is a fundamental difference between the first 'style' I outlined and the second 'process' that I described. The 'hacking away at code' style relies heavily on faith that the changes I made didn't break something else, as manual regression testing is well-nigh impossible with even a mildly complicated system. Applying code changes in the 'most convenient' spots is a common activity that results in a rapidly rotting codebase, which leads to fear-based development (not wanting to touch a certain part of the system, since we aren't sure what will happen) and to longer amounts of time needed to take changes to production. The second 'process' that I defined is a reproducible, step-by-step description of how I develop software. I can drill into parts of it and explain them. For example, how do I use a lower-level executable specification tool to evolve my design? Here's the process I use (with a sketch of the resulting specification after the list):
- Write a one-line description of a small interaction between a unit in my system and the units it collaborates with
- Express that description in executable code that isolates the unit under test from its collaborators
- Fake the collaborators so they provide deterministic results to the unit under test
- Execute the specification
- Watch it fail
- Add just enough code to the unit under test to get the specification passing
- Run the previous specifications for the unit under test to verify that it works as it did before I made the change
- Look at the just-written specification for possible refactorings to eliminate duplication
- Refactor the specification
- Look at the unit under test for duplication or violations of other simple design guidelines (such as expressing intent)
- Refactor the unit under test to eliminate the problems found in the previous step
- Wash, Rinse, Repeat
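
Concretely, one pass through that loop might produce something like the following. PaymentProcessor and the gateway are invented names, and the code under test is defined inline so the sketch stands alone; in RSpec (circa 2009), faking the collaborator looks like this:

  # Just enough production code to get the specification passing
  class PaymentProcessor
    def initialize(gateway)
      @gateway = gateway
    end

    def charge(amount_in_cents)
      @gateway.charge(amount_in_cents)
    end
  end

  describe PaymentProcessor, "charging an order" do
    it "asks the gateway to charge the order total" do
      # Fake the collaborator into providing a deterministic result
      gateway = mock("gateway")
      gateway.should_receive(:charge).with(2500).and_return(true)

      PaymentProcessor.new(gateway).charge(2500).should be_true
    end
  end

The mock isolates PaymentProcessor from whatever real payment gateway the system talks to, so the specification is fast and deterministic, which is what makes the 'run previous specifications after every change' step practical.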

Now, is the process I follow the exact same process I followed last year at this time? Nope, it isn't. As I've learned more and gained experience, I have evolved my process to incorporate what I've learned. The key, though, is that I've evolved my process consciously.

Is this the process that everyone should use? I don't care. What I do care about is that people have a process that they actually follow. Not a 'style' or an off-the-cuff way of coding, but a set, experience-based process for writing robust, maintainable, malleable code that they can be proud of over the long term.

Whew, that was a lot of typing. Sorry about that.
-Corey




--
http://www.coreyhaines.com
The Internet's Premiere source of information about Corey Haines

Eric Smith

Apr 29, 2009, 12:20:29 PM4/29/09
to software_cr...@googlegroups.com
You know, Corey - there is this newfangled medium called "blogging." You should totally try it.

Corey Haines

Apr 29, 2009, 12:24:27 PM4/29/09
to software_cr...@googlegroups.com
Haha! I suppose I could clean this up and move it over to my blog. I need a ghostwriter. :)

-Corey