"Forward structuring its code into safe, disposable units, or making a system small enough to fit into one person's head"
I really like these two ideas; do you have any more info/references?
I'll try to add a little. I work for Forward (currently within uSwitch), and a lot of what we do evolved out of the way we approached problems when we were TrafficBroker (a small-ish paid search agency with an emphasis on tech).
At TrafficBroker a lot of what we did was capture data, run some analysis, come up with some experiments, and rinse and repeat. We wanted to add more and more data sources, and so focused on building small services we could add into our existing "system" (I favour "ecosystem", as it implies something that emerges rather than something that was planned; probably moot, given that almost all planned systems I've ever worked on also changed over time).
We'd build services that would provide a façade over a more complex API (AdWords at the time was SOAP and XML downloads only; most of our downstream tools worked better with CSV). When we did this we found we could replace those services more quickly than we could fix any inherent problem. For example, our Google reporting service went from Ruby, to Ruby + a little C, to Clojure + Hadoop, to JRuby; most of this was driven by the growing size of the datasets and hitting segfaults or other problems we'd prefer not to debug. We could apply the same approach when breaking systems apart along domain concepts, e.g. splitting uSwitch's Energy product into a Comparison service and a Tariff Editing service; domain language boundaries are made more concrete this way.
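To make the façade idea concrete, here's a minimal sketch of the shape, not our actual code: the namespace, the function names, and the XML element layout are all hypothetical, and it assumes org.clojure/data.csv is on the classpath.

    (ns reporting.facade
      (:require [clojure.data.csv :as csv]   ; assumed dependency: org.clojure/data.csv
                [clojure.xml :as xml]))

    (defn fetch-report-xml
      "Stand-in for the upstream SOAP/XML download; returns parsed XML."
      [report-url]
      (xml/parse report-url))

    (defn row->fields
      "Pull the text content out of one row element (layout assumed)."
      [row]
      (mapv (comp first :content) (:content row)))

    (defn report->csv
      "Flatten the parsed report into the CSV our downstream tools prefer."
      [report writer]
      (csv/write-csv writer (map row->fields (:content report))))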
We'd strive to keep API compatibility and just replace underlying implementations. When we did this we realised we could change quite a lot of the make-up of a service without the wider system needing to know about it.
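The useful property falls out of keeping the HTTP contract separate from whatever sits behind it. A Ring-style sketch (names hypothetical):

    ;; The handler owns the stable contract; `fetch-report-csv` is whichever
    ;; implementation we're running this month (Ruby shelled out to, a Hadoop
    ;; job's output, native Clojure...). Consumers only ever see the contract.
    (defn make-handler [fetch-report-csv]
      (fn [request]
        {:status  200
         :headers {"Content-Type" "text/csv"}
         :body    (fetch-report-csv (get-in request [:params :account-id]))}))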
We would write automated tests where helpful (lots of conversion/transformation stuff that could get hairy, for example). We'd also use our regular reporting/monitoring tools to tell us if we'd done something silly. We'd deploy versions of services on new Amazon instances that would sit under the same load balancer, monitor performance and behaviour, and then move on; in effect, we'd have competing versions of our services in production at the same time.
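The competition happened at the Amazon load balancer rather than in our code, but the idea reduces to weighted routing; a rough sketch (URLs and weights invented):

    (def backends
      [{:url "http://reports-v1.internal" :weight 0.9}    ; incumbent
       {:url "http://reports-v2.internal" :weight 0.1}])  ; candidate replacement

    (defn pick-backend
      "Choose a backend with probability proportional to its weight;
      weights are assumed to sum to 1."
      [backends]
      (let [r (rand)]
        (reduce (fn [acc {:keys [weight] :as backend}]
                  (let [acc (+ acc weight)]
                    (if (< r acc) (reduced backend) acc)))
                0.0
                backends)))

Shift the weights as confidence in the new version grows, then retire the incumbent.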
We've been doing the same at uSwitch for a while now: breaking an enormous monolithic system into lots more independent pieces. It's more complex (in the sense that you can't sit down at a single codebase and start clicking around in IntelliJ), but it's simpler in that you can work without needing to worry about the whole. Conway's law has helped us break the monolithic dev team into smaller pieces, allowing people to change all parts of uSwitch without anyone centrally planning; it's more curated than controlled now.
Most change is now local (both people and code). We'd prefer slightly larger individual teams to give us more slack, but we want to make sure we find the right people. This does mean that local effects are amplified (if a person leaves a two-person team, that might suck to start with), but it doesn't really affect the system at large. For example, an ex-colleague wrote a tiny C Ruby gem to interface with a proprietary C lib we use when people switch their energy; when it needed fixing, it took a few hours to write some tests and sort it out. I was free over the weekend and wrote an API-compatible version in Go (mainly to learn Go, to be honest), but we could've deployed it alongside the old version and eventually replaced the original.
To be clear, almost all services have test suites; really-important-services(tm) have really extensive test suites (our energy comparison service has hundreds and hundreds of tests, as do the services that integrate us with suppliers). However, we have few tests for the composition of services. We just ask people to be mindful that when changing APIs you're likely to ripple into other places, but most APIs have a small number of consumers; cellular OO in the large, if you will.
It's definitely context-sensitive: a lot of Forward businesses don't solve problems in the same way, and even teams within uSwitch don't approach problems in exactly the same way. It's contingent upon the problem you're solving, the people you have, and more.
*switches back to original thread :)*
I'd say we approach building services/apps/tools in a similar vein: we'll use tests to give us confidence we're doing the right things (e.g. can we correctly figure out how many kWh £1500 buys on a British Gas plan?), but we tend not to write 'story' tests (the kind of thing you see Cucumber being abused for).
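The kWh check is the sort of thing I mean. A minimal clojure.test sketch, with made-up tariff numbers and names rather than real British Gas rates:

    (ns energy.comparison-test
      (:require [clojure.test :refer [deftest is]]))

    (defn spend->kwh
      "How many kWh `spend` pounds buys on a plan with an annual standing
      charge (pounds) and a per-kWh unit rate (pounds)."
      [spend {:keys [standing-charge unit-rate]}]
      (/ (- spend standing-charge) unit-rate))

    (deftest fifteen-hundred-pounds-of-energy
      ;; (1500 - 180) / 0.12 = 11000 kWh; the ratio keeps the arithmetic exact
      (is (= 11000 (spend->kwh 1500 {:standing-charge 180 :unit-rate 12/100}))))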
When I'm writing Clojure it'll be a mix of writing speculative code in Emacs with a REPL, playing with it in the REPL, and then, when it gets tough, adding some tests to help me. When I first started writing Clojure it was for integrating with some Java libs that I wanted to play with in order to understand them; the kind of thing that's much slower when you're writing sample clients in Java, specifying Maven deps, building, etc.
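It's the difference between a whole throwaway Java project and a couple of lines at the prompt; any JDK class works the same way, for example:

    user=> (import 'java.util.concurrent.TimeUnit)
    java.util.concurrent.TimeUnit
    user=> (.toMillis TimeUnit/HOURS 2)
    7200000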
On Tuesday, May 7, 2013 12:26:37 PM UTC+1, Malcolm Sparks wrote:
I agree, it's very context-sensitive.
"Forward structuring its code into safe, disposable units, or making a system small enough to fit into one person's head"
I really like these two ideas; do you have any more info/references? When modules are small and disposable you can rewrite them individually: a great 'third way' out of the classic maintain-or-rewrite dilemma.
I think the latter (small enough for one's head) is complementary to the deploy-now-fix-later approach I'm peddling. The problem with writing features before they're needed is that some never will be, and they add to the bloat. Another contribution to the bloat is external configuration, driven by the myth that 'hard-coding' values is a bad thing. If you can nrepl-in later and change those hard-coded values in place, then they're no longer hard-coded, and you don't need a complex configuration system.
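What that looks like in practice (the namespace and the var are hypothetical):

    ;; the "hard-coded" value as it appears in the source:
    (ns myapp.core)
    (def max-results 100)

    ;; later, from an nREPL session connected to the live process:
    user=> (alter-var-root #'myapp.core/max-results (constantly 250))
    250
    user=> myapp.core/max-results
    250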
"The question is how that sustains over time and across people"
That's a great point. I only have personal anecdotal experience to draw upon, but I believe there are many projects where the test suite has died and the cost of maintaining it is comparable to the value it provides (especially if there is no longer full coverage).
I'm a strong proponent of testing, but only when testing has the effect of driving down the cost of change. I've been on too many projects where it's a case of 'quantity over quality' when it comes to unit tests (I've been guilty of that too in my own projects).
Smaller end-to-end test suites give me greater confidence than large, cumbersome suites that are basically an accumulation of mothballed TDD unit tests. So I think your question about sustaining over time and across people can be asked of traditional 'agile' testing regimes too.
*switches back to original thread*