Test-driven to REPL-driven development


Andrew Whitehouse

May 3, 2013, 11:16:20 AM5/3/13
to london-c...@googlegroups.com
Fellow Clojurians,

I am trying to wean myself off IntelliJ and, being a long-time vim user, am experimenting with vim-fireplace / vim-clojure-static. I have tended to follow TDD, so am looking to figure out how to develop effectively in the REPL.

For those of you out there who develop in the REPL, would you be able to share your workflow and experiences? Whereas TDD has a clear process (write failing test / make it work / possibly refactor / repeat), in the REPL I have an inkling it is about getting the desired result from applying functions to data (in short increments), i.e. exploring the available APIs to get a specific structure to look the way you want and then generalising that into a function(?).

I (and I expect others) would love to hear experiences of REPL- vs non-REPL development. I have some specific questions too, in no particular order:
- Do you start off by def'ing your data structures and then progressively folding these into functions via let bindings ? (Or not?) 
- Do Emacs / vim / any other editors have any tools that make this process easier? 
- In what situations have you found a long-running REPL necessary (vs saving to file and reloading), and are there any common patterns you find yourself following in an extended REPL session? (Do you bother with :reload-all ?  *)
- what does developing in the REPL give you over TDD?
- do you still write tests and if so what is your motivation (catch regressions / "document" expected behaviour?)

- Andrew

https://github.com/clojure/tools.namespace (see section "Reloading code - Motivation")








Simon Katz

May 3, 2013, 12:13:36 PM5/3/13
to london-c...@googlegroups.com
There was some discussion here a few weeks ago around some of this — see https://groups.google.com/d/msg/london-clojurians/7_T42FX7XYI/ZX_WpBFELEYJ and the ensuing discussion.

Simon

Andrew Whitehouse

May 3, 2013, 12:25:42 PM5/3/13
to london-c...@googlegroups.com
Hi Simon,

I remember the e-mail discussion; I am hoping that those who favour coding in the REPL (or who have seen it doesn't work for some cases) can share specific practical tips on their process, along with what works and what doesn't.

- Andrew

--
You received this message because you are subscribed to the Google Groups "London Clojurians" group.
To unsubscribe from this group and stop receiving emails from it, send an email to london-clojuri...@googlegroups.com.
To post to this group, send email to london-c...@googlegroups.com.
Visit this group at http://groups.google.com/group/london-clojurians?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Neale Swinnerton

May 3, 2013, 12:41:36 PM5/3/13
to london-c...@googlegroups.com
There are two phases which seem, in Java land at least, to get conflated.

1. Poking an API to learn its capabilities and behaviour. 
2. Writing tests to prove the correctness of your use of the API.

These are quite different phases, each important in their own way.

1. is the 'kicking the tyres' phase - opening the bonnet, the boot, etc. when you're looking at a car to see if it's what you want to buy.

2. is the MOT, looking at service history, getting an expert to look it over, test drive etc.

In Java, there isn't an effective way to do '1', so you end up writing tests to do this, because it's handy, in an IDE, to run code snippets that way. Often these 'type 1' tests hang around longer than they should. I find in Java that TDD shapes the design of the application in a positive way. For me one of the primary benefits of TDD is that it forces testability. I like to think more of 'Test-Driven Design' than development - I design it such that it can be tested.

In Clojure, because 'noodling in the REPL' is so easy and so much fun, we tend to build programs by creating small functions and composing them together from the bottom up. I find that working in this way, testability comes about almost for free. You get these small bits of functionality that are so easy to reason about that they don't necessarily need extensive tests.

There are some hygiene aspects to working in the REPL. Everyone has been bitten by the rename-a-function-but-old-code-still-calls-it kind of problem. The tooling around this is improving, but some discipline is still required. Committing to git *all the time* is the way I handle that. I tend to write code in a .clj buffer and evaluate in the REPL. If a function works as expected, I commit it. Often I write tests at this point and commit them too. When everything is working I then re-write the history to a clean set of 'feature' commits to be pushed upstream.

Any help?


Neale Swinnerton
{t: @sw1nn, w: sw1nn.com }



Simon Katz

May 5, 2013, 6:29:36 AM5/5/13
to london-c...@googlegroups.com
I think it's worth repeating and clarifying a few things I said in this message in the conversation I mentioned earlier:
  • It's important to distinguish between two meanings of "REPL" — one is a window that you type forms into for immediate evaluation; the other is the process that sits behind it and which you can interact with from not only REPL windows but also from editor windows, debugger windows, the program's user interface, etc.
  • It's important to distinguish between REPL-based development and REPL-driven development:
    • REPL-based development doesn't impose an order on what you do.  It can be used with TDD or without TDD.  It can be used with top-down, bottom-up, outside-in and inside-out approaches, and mixtures of them.
    • REPL-driven development seems to be about "noodling in the REPL window" and later moving things across to editor buffers (and so source files) as and when you are happy with things. I think it's fair to say that this is REPL-based development using a series of mini-spikes. I think people are using this with a bottom-up approach, but I suspect it can be used with other approaches too.
      • (But this is something I don't typically do and I may be misunderstanding what people mean by REPL-driven development.)
      • (I prefer to get my thoughts straight into source files, and play with them there.  If I don't like what I have I can delete it.)
I typically run the code I am working on in a few different ways:
  • Before committing I'll run all my tests in a fresh process. (That doesn't mean I have to re-start other processes that are running my program, but it may be a good time to do that.)
  • Using REPL-based development. When I add or change definitions I'll send each one to the REPL process. Those definitions might be data, functions, tests — anything.  I'll also play with the evolving program through its user interface.
    • FWIW, changing macros can be painful because all callers will need to be recompiled if they need the new definition, so if you've changed what the macro does rather than simply how it does it it can be a pain — maybe worth restarting the REPL in such cases.  Ah! maybe https://github.com/clojure/tools.namespace helps here? — Not sure; I need to try it.
  • Recently I've been using Midje's autotest feature via Leiningen.  This creates a process that runs all your tests, and watches for changes to source files re-running tests as necessary.  I have this running in a small always-visible window in the corner of my screen and I check that things stay green when I save my files.  This is very nice, and I'm finding myself bothering less with keeping my own REPL process up-to-date with all my changes.  If my REPL process gets out of date, the refresh function in https://github.com/clojure/tools.namespace is useful.  I've found one problem in Midje autotest with records and protocols that makes me a little wary that when I start to use dark corners of Clojure things may not go well.
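The macro pain mentioned above is easy to reproduce in a few lines; here is a minimal sketch (the names `twice` and `caller` are made up for illustration):

```clojure
;; A macro call is expanded when the *caller* is compiled, so callers
;; keep the old expansion until they are themselves re-evaluated.
(defmacro twice [x] `(* 2 ~x))
(defn caller [n] (twice n))     ; compiled against the first definition

(defmacro twice [x] `(* 3 ~x))  ; redefine what the macro does

(caller 5)  ; still 10 - the old (* 2 n) expansion is baked in
;; Only after re-evaluating caller's defn would (caller 5) return 15.
```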
See also a few comments interspersed in your message below.

On Friday, 3 May 2013 16:16:20 UTC+1, Andrew Whitehouse wrote:
Fellow Clojurians,

I am trying to wean myself off IntelliJ and being a long-time vim user am experimenting with vim-fireplace / vim-clojure-static. I have tended to follow TDD so am looking to figure out how to develop effectively in the REPL.

For those of you out there who develop in the REPL, would you be able to share your workflow and experiences? Whereas TDD has a clear process (write failing test / make it work / possibly refactor / repeat), in the REPL I have an inkling it is about getting the desired result from applying functions to data (in short increments), i.e. exploring the available APIs to get a specific structure to look the way you want and then generalising that into a function(?).

I think you're describing what I'm calling bottom-up REPL-driven development.

You can combine TDD and REPL-based development.

I guess if you start by writing tests in a REPL window and then move them to source files, you'd have TDD REPL-driven development (but I don't do that — not sure if anyone does).


I (and I expect others) would love to hear experiences of REPL- vs non-REPL development. I have some specific questions too, in no particular order:
- Do you start off by def'ing your data structures and then progressively folding these into functions via let bindings ? (Or not?) 
 
- Do Emacs / vim / any other editors have any tools that make this process easier? 

I use Emacs and nrepl; I can send forms from Emacs to nrepl for evaluation.  I think that's about it for me.
 
- In what situations have you found a long-running REPL necessary (vs saving to file and reloading), and are there any common patterns you find yourself following in an extended REPL session? (Do you bother with :reload-all ?  *)

I restart my REPL process frequently if I'm changing names or deleting things. If I'm only adding stuff I sometimes go for days without restarting, but that's unusual because I'm almost always doing small-scale refactorings of newly-added stuff.  Probably my longest-running REPL sessions have been when I've been tracking down a particularly nasty bug and I haven't been changing or adding much.  (Much of what I've just said is from my days with Common Lisp, but I think the same will be true for Clojure.)

I haven't used :reload-all. It seems to be of limited use because of the problems you mentioned, described at https://github.com/clojure/tools.namespace#reloading-code-motivation.
 
- what does developing in the REPL give you over TDD?

I guess you're asking "what does bottom-up REPL-driven development give you over TDD".  I'll leave that for others because it's not what I do.

For me:  REPL-based development gives me fast feedback, and you can do REPL-based development with TDD.

- do you still write tests and if so what is your motivation (catch regressions / "document" expected behaviour?)

Yes (yes / yes).

I've heard it suggested that you don't need tests if you're writing Clojure.  That has to be nonsense.  But probably you don't need to test at the low-level that seems typical of many Java systems.  (I'm fairly new to TDD and have been surprised at the low level of testing I've sometimes seen.  I'm not sure yet whether the low-level testing is TDD-done-right or TDD-taken-to-unnecessary-extremes.)

 
- Andrew

https://github.com/clojure/tools.namespace (see section "Reloading code - Motivation")

I hope some of this is helpful.  I'm enjoying getting my thoughts straight on this and trying to understand what other people are doing.

Simon

P.S. I've composed this in a tiny frame in the Google Groups web UI.  I can't see a way to "pop-out" into a larger window.  If anyone knows how please tell me!  This is painful.  I'm sure there used to be a way.

Andrew Whitehouse

May 6, 2013, 6:25:03 AM5/6/13
to london-c...@googlegroups.com
Neale & Simon,

Excellent responses, which I wanted to spend a couple of days pondering.

When 'noodling' I have so far tended to write a series of defs, as I have observed others do, and then fold these into my functions using let bindings. The let bindings are useful when working with a line-based editor like IntelliJ. However I can see that building the code up from small functions could lead to more expressive code focused on function composition.

i.e. something like this:

(def my-map [ (comment blah) ] )

(defn first-function [m] (comment "some transformation on what's in my-map") )
(defn second-function [m] (comment "another transformation") )
; (first-function my-map)
(-> my-map first-function second-function)

in preference to:

(def m2 (whatever-the-body-of-first-function-does my-map))
(def m3 (comment etc.))

in the REPL, which then becomes:

(defn the-function [m]
   (let [m2 (whatever-the-body-of-first-function-does m)
         m3 (comment etc.)]
      (final-function m3)))

(and if I had thought of a concrete example I would have more expressive variable names).

I have looked around for other material covering REPL-based development ...

  • There is also a REPL-Oriented Programming chapter in the O'Reilly Clojure Programming book which talks about techniques for namespace manipulation and defonce (to avoid variables being redefined on :reload). It also makes the points that Clojure's workflow doesn't have to be file-based, and that Clojure's design allows for dynamic redefinition of constructs at runtime.
I found both of these fairly high level and they left me wanting to see more examples.
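As a concrete example of the defonce point, here is a minimal sketch (the `app-state` var is invented for illustration):

```clojure
;; defonce only binds the var if it is currently unbound, so
;; re-evaluating the form (e.g. when a namespace is :reload-ed)
;; leaves accumulated REPL state intact.
(defonce app-state (atom {:requests 0}))

(swap! app-state update :requests inc)

;; Re-evaluating the defonce is a no-op; the counter survives:
(defonce app-state (atom {:requests 0}))
@app-state   ; => {:requests 1}
```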

Back to Neale's point about test-driven design vs test-driven development: Neal Ford did an "Emergent Design" series on developerWorks which I would recommend: http://www.ibm.com/developerworks/views/java/libraryview.jsp?search_by=evolutionary+architecture+emergent+design

I would welcome more opportunities to practise the techniques we are discussing; for those of us not yet using Clojure in our day jobs (mine is a part-time project) I wonder whether the dojo is a good place to focus on this, or perhaps something separate along the lines of the Code and Coffee sessions that the LSCC run before work. 

Other possibilities might be for one of our more experienced members to put together some screencasts showing different ways to code with the REPL, along the lines of the Peepcode Play-by-Play model, or for the LambdaNext guys to incorporate this in their training?

- Andrew


Malcolm Sparks

May 6, 2013, 10:58:04 AM5/6/13
to london-c...@googlegroups.com
Here's my Clojure development process, fwiw :-

1. I usually start with 'lein new', followed by 'lein repl'. Next I connect my Emacs with 'nrepl'. I set the nrepl port to 6011 with M-x customize-variable, and my .lein/profiles.clj is

{:user {:repl-options {:host "0.0.0.0" :port 6011}}
 :dev  {:dependencies [[clojure-complete "0.2.2"]]}}

I tend not to run multiple nrepl sessions in my Emacs, because my brain isn't big enough, so fixing the port means I don't have to keep typing it.

2. I get the *nrepl* buffer but I never type anything into it. I don't understand why anybody bothers with it. It doesn't even do paredit properly. Everything I do is inside a .clj buffer, using M-C-x to eval the expressions I would otherwise type into the *nrepl* buffer. I might use the *nrepl* buffer for printlns, and pprints, but that's it.

3. I start with a let binding, which sets up test data and so on, and also functions. This way, I don't have to navigate around and redef stuff, I just hit M-C-x somewhere in the let binding and check the result in the minibuffer.

4. Sometimes I need the result which went into the minibuffer. I can dig it out of the *Messages* buffer, or more usually I do C-u M-C-x which inserts the result where the cursor is.

5. When I'm happy that a particular function under development is working, I factor it into a defn. With paredit, that's usually just a C-k and C-y. I try to keep my namespaces to just defns and try not to use defs anywhere - they are global variables, after all. While the namespace still has a big ugly let binding in it, I usually comment it out by placing a #_ before it before I commit to git.

6. Like Neale I commit regularly, sometimes just with 'checkpoint' in the message. I git rebase a lot, usually reset to the commit prior to the checkpoints with magit's x key, and then recommit. If I do something really trivial after a single commit, I hit C-c C-a in the Magit commit buffer that does an 'append' commit.
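Steps 3 to 5 might look something like this (the order data and function names are invented for illustration):

```clojure
;; Step 3: a scratch let holding test data and a candidate function
;; together; M-C-x anywhere in the form shows the result in the
;; minibuffer, no global defs needed.
(let [orders [{:id 1 :total 40} {:id 2 :total 75}]
      big?   (fn [o] (> (:total o) 50))]
  (filter big? orders))
;; => ({:id 2, :total 75})

;; Step 5: once it behaves, C-k / C-y the function out into a defn
;; (and #_ the leftover let before committing).
(defn big-order? [o] (> (:total o) 50))
```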

I'm a big fan of nrepl-based development. I've been working recently with Karsten Lang to improve the situation with remote nrepls. Right now, things like nrepl-jump don't work when you're developing against a remote nrepl, but we've got fixes for that now. I also have a way of combining remote nrepls with ssh tunnels (which are started from Emacs), so that remote nrepl development is secure. More coming. Watch this space.

(agile fans stop reading here)

I love remote nrepls. My belief is that we still write too much code in anticipation of situations that never arise. Reserving the right to add a feature or fix a bug via a remote nrepl is a technique for further flattening the cost-of-change curve - i.e. the tendency of the cost of a feature to increase with the age of a project. If you can do a feature later, for the same cost, then don't do it now. This is the very basis of agile (first chapter of the first book of the first agile methodology). Current development approaches try to build in too much quality up-front (yikes, I can hear those swords being sharpened...) - most systems we build aren't life-and-death, and if you can fix a bug at a moment's notice with zero downtime then why bother with all this testing?!! I'm currently building a system called 'up' (https://github.com/malcolmsparks/up) that builds upon this notion of 'deploy now, fix later' (DNFL).

Malcolm






Steve Freeman

May 7, 2013, 4:55:57 AM5/7/13
to london-c...@googlegroups.com
On 6 May 2013, at 15:58, Malcolm Sparks wrote:
> (agile fans stop reading here)

oops, stepped over the mark...

This was Paul Graham's experience with his start-up, at least one famous Smalltalk system, and of course all of APL.

Of course any particular fix can be done on the spot and a better language makes it so much easier. The question is how that sustains over time and across people. One of the most important features of a high-coverage code base is the confidence to make changes without risk of accidental damage. There are other ways to achieve that, such as Forward structuring its code into safe, disposable units, or making a system small enough to fit into one person's head. It's very context-sensitive. I also wonder if most of the systems we work on require such rapid turnaround.

I don't have the relevant experience here, so I'm just prodding.

S.

Malcolm Sparks

May 7, 2013, 7:26:37 AM5/7/13
to london-c...@googlegroups.com
I agree, it's very context-sensitive.

"Forward structuring its code into safe, disposable units, or making a system small enough to fit into one person's head"

I really like these 2 ideas, do you have any more info/references? When modules are small and disposable you can re-write them individually - a great 'third way' out of the classic maintain-or-rewrite dilemma.

I think the latter (small enough for one's head) is complementary with the deploy-now-fix-later approach I'm peddling. The problem with writing features before they are needed is that some won't be, and they add to the bloat. Another contribution to the bloat is external configuration because of the myth that 'hard-coding'
values is a bad thing. If you can nrepl-in later and change those hard-coded values in place, then they're no longer hard-coded, and you don't need a complex configuration system.


"The question is how that sustains over time and across people"

That's a great point. I only have personal anecdotal experience to draw upon, but I believe there are many projects where the test-suite has died and the cost of maintaining it is comparable to the value it provides (especially if there is no longer full coverage).

I'm a strong proponent of testing, but only when testing has the effect of driving down the cost of change. I've been on too many projects where it's a case of 'quantity over quality' when it comes to unit tests (I've been guilty of that too in my own projects).

Smaller end-to-end test suites give me greater confidence than large cumbersome suites that are basically the accumulation of moth-balled TDD unit-tests. So I think your question about sustaining over time and across people can be asked about traditional 'agile' testing regimes too.








Paul Ingles

May 7, 2013, 8:01:13 AM5/7/13
to london-c...@googlegroups.com
> "Forward structuring its code into safe, disposable units, or making a system small enough to fit into one person's head"
>
> I really like these 2 ideas, do you have any more info/references?

I'll try and add a little- I work for Forward (currently within uSwitch) and a lot of what we do evolved out of the way we approached problems when we were TrafficBroker (a small-ish paid search agency with an emphasis on tech).

In TrafficBroker a lot of what we did was to capture data, run some analysis, come up with some experiments and rinse and repeat. We wanted to add more and more data sources and so focused on building small services that we could add into our existing "system" (I favour ecosystem as it implies something that emerges, rather than something that was planned; probably moot given almost all planned systems I ever worked on also changed over time).

We'd build services that would provide a façade over a more complex API (AdWords at the time was SOAP and XML downloads only- most of our downstream tools worked with CSV better). When we did this we found we could replace those services more quickly than we could fix any inherent problem. For example, our Google reporting service went from Ruby, to Ruby + a little C, to Clojure + Hadoop, to JRuby. Most of this was driven by the growing size of the datasets and hitting segfaults or other problems we'd prefer not to debug. We could apply the same when we'd break systems apart by more domain concepts- i.e. breaking uSwitch's Energy product into a Comparison service and Tariff Editing- there are domain language boundaries that are made more concrete this way.

We'd strive to keep API compatibility and just replace underlying implementations. When we did this we realised we could change quite a lot of the make-up of a service without the wider system needing to know about it.

We would write automated tests where helpful (lots of conversion/transformation stuff that could get hairy, for example). We'd also use our regular reporting/monitoring tools to know if we did something silly. We'd deploy versions of services on new Amazon instances that would sit under the same load-balancer, monitor performance and behaviour, and then move on- in effect, we'd have competing versions of our services in production at the same time.

We've been doing the same at uSwitch for a while now- breaking an enormous monolithic system into lots more independent pieces. It's more complex (in the sense you can't sit down at a single codebase and start clicking around in IntelliJ) but it's simpler given you can work without needing to worry about the whole- Conway's law has helped us break the monolithic dev team into smaller pieces, allowing people to change all parts of uSwitch without someone centrally planning- it's more curated than controlled now.

Most change is now local (both People and Code). We'd prefer to have slightly larger individual teams to give us more slack but we want to make sure we find the right people. This does mean that the local effects are amplified (i.e. if a person leaves from a 2 person team that might suck to start), but it doesn't really affect the system at large. For example: an ex-colleague wrote a tiny C Ruby Gem to interface with a proprietary C lib we use when people switch their energy. It took a few hours to write some tests and fix it. I was free over the weekend and wrote an API compatible version in Go (mainly to learn Go to be honest)- but we could've deployed it alongside the old version and eventually replaced. 

To be clear, almost all services have test suites; really-important-services(tm) have really extensive test suites (our energy comparison service has hundreds and hundreds, as do the services that integrate us with suppliers). However, we have few tests for the composition of services. We just ask people to be mindful that when changing APIs you're likely to ripple into other places but most APIs will have a small number of consumers- cellular OO in the large if you will.

It's definitely context sensitive: a lot of Forward businesses don't solve problems in the same way and some teams within uSwitch don't approach problems in exactly the same way. It's definitely contingent upon the problem you're solving, the people you have, and more.

*switches back to original thread :)*

I'd say we approach building services/apps/tools in a similar vein: we'll use tests to give us confidence we're doing the right things (i.e. can we correctly figure out how many kWh £1500 on British Gas' plan is) but we tend not to write 'story' tests (the kinds of things you see Cucumber being abused for). 

When I'm writing Clojure it'll be a mix of writing speculative code in Emacs with a REPL, playing with it in the REPL, and then when it gets tough I'll start adding some tests to help me. When I first started writing Clojure it was for integrating with some Java libs that I wanted to play with to understand- the kind of thing that's much slower when writing sample clients in Java, specifying Maven deps, building etc.






Tristan Mills

May 7, 2013, 8:30:54 AM5/7/13
to london-c...@googlegroups.com
Hi there,

It's been a while since I could do much Clojure development, but my experience is as follows:

I use emacs (vim should be similar) and would practice TDD (as I do in Java). But I'd also find myself 'noodling around in the REPL' (i.e. the command-line one) to try things out.

This actually mirrors my Java dev process. I will often write scratch 'tests' and code to try things out, but that gets thrown away and I enter TDD proper. The REPL just means that my experimentation is far more interactive and quicker, and can happen at any point (want to check something? Play in the REPL, then capture that in a test if it seems necessary).

Naturally this isn't hard and fast, but I think it's important in any language to guide your code with tests and experimentation. The tests ensure that your refactoring doesn't break anything and help separate out concerns (which is far easier in Clojure given the tendency to write lots of small functions and compose them - Java seems to tempt you to lump together lots of things which don't belong together).

I am a massive proponent of TDD (not just having tests) and see no reason not to practice it in Clojure - the only issue is that the tooling is less mature and we're still looking for best practices for functional styles of programming.

Tristan
 


Bruce Durling

May 7, 2013, 9:31:40 AM5/7/13
to London Clojurians
Fellow Clojurians,

On Tue, May 7, 2013 at 1:01 PM, Paul Ingles <pa...@oobaloo.co.uk> wrote:
> When I'm writing Clojure it'll be a mix of writing speculative code in Emacs
> with a REPL, playing with it in the REPL, and then when it gets tough I'll
> start adding some tests to help me.

This is basically what I do.

I think the only thing I get picky about is the names I create. I only
use throwaway names in the repl. If a name changes (renaming a
function or changing the signature) I always restart the repl as I
want to make sure I'm not caught out by anything.

If I'm running lein midje :autotest I'll bounce that too if I change a
function signature or name. I don't want to get caught out when I
deploy an uberjar to a remote machine.

cheers,
Bruce


--
@otfrom | CTO & co-founder @MastodonC | mastodonc.com
See recent coverage of us in the Economist http://econ.st/WeTd2i and
the Financial Times http://on.ft.com/T154BA

Steve Freeman

May 7, 2013, 9:36:37 AM5/7/13
to london-c...@googlegroups.com
Top discussion, chaps (and a potential blog post?).

At one level, I think what pingles is describing is closer to some original conceptions of OO, rather than the C++ classification mess most people get.

My next question is about managing complexity *between* services, since it's not within them any more. I'm guessing that Forward doesn't have so many that it's hard to keep track, plus it's mostly internal so it is actually possible to find all the clients.

S

On 7 May 2013, at 13:01, Paul Ingles wrote:
>> "Forward structuring its code into safe, disposable units, or making a
>> system small enough to fit into one person's head"
>
> I really like these 2 ideas, do you have any more info/references?
>
>
> I'll try and add a little- I work for Forward (currently within uSwitch)
> and a lot of what we do evolved out of the way we approached problems when
> we were TrafficBroker (a small-ish paid search agency with an emphasis on
> tech).
> [...]

Steve Freeman

May 7, 2013, 9:38:00 AM5/7/13
to london-c...@googlegroups.com
Likewise when I'm working in Python (am I allowed to say that on this list?). I've come across Java devs who've never experienced the value of a REPL.

S

Bruce Durling

May 7, 2013, 9:42:19 AM5/7/13
to London Clojurians
The best thing about using the old JDEE was having a beanshell repl.

cheers,
Bruce



Paul Ingles

unread,
May 7, 2013, 9:43:38 AM
to london-c...@googlegroups.com
When I was at TW a colleague joined from a game dev background and showed me how he used IronPython to help him kick the tyres of this horrendous .NET app we were patching at a bank. The app would take 45mins to boot after loading the billion data sources it needed.

At the time I'd only come across REPLs when I'd been learning Ruby (that and the immediate windows in the various IDE debuggers). It was a massive eye-opener to see it being used against an enterprise app.

He'd also use PowerShell a lot which, although cool, I never quite got my head around.


On Tue, May 7, 2013 at 2:38 PM, Steve Freeman <st...@m3p.co.uk> wrote:

Paul Ingles

unread,
May 7, 2013, 10:43:08 AM
to london-c...@googlegroups.com
Top discussion, chaps (and a potential blog post?). 

I think I may have started a few posts in draft but never finished. I find it really interesting but very difficult to describe much without it reducing to "it depends" [1]

I gave a presentation at NoSQL Exchange related to this theme- video and slides available here:



[1] but then I guess almost everything ends up this way too? :)

Steve Freeman

unread,
May 7, 2013, 11:16:06 AM
to london-c...@googlegroups.com
On 7 May 2013, at 12:26, Malcolm Sparks wrote:
> I think the latter (small enough for one's head) is complementary with the
> deploy-now-fix-later approach I'm peddling. The problem with writing
> features before they are needed is that some won't be, and they add to the
> bloat. Another contribution to the bloat is external configuration because
> of the myth that 'hard-coding' values is a bad thing.

Agreed with both of those, but I would hope to see neither problem in a properly "agile" project.

> If you can nrepl-in later and change those hard-coded values in place, then
> they're no longer hard-coded, and you don't need a complex configuration system.

Until you bounce the system? And who knows about it?

> That's a great point. I only have personal anecdotal experience to draw
> upon, but I believe there are many projects where the test-suite has died
> and the cost of maintaining is comparable to the value it provides
> (especially if there is no longer full coverage).

+1 (we have a workshop about that :)

> I'm a strong proponent of testing, but only when testing has the effect of
> driving down the cost of change. I've been on too many projects where it's
> a case of 'quantity over quality' when it comes to unit tests (I've been
> guilty of that too in my own projects).

we never promised "easy" :)

> Smaller end-to-end test suites give me greater confidence than large
> cumbersome suites that are basically the accumulation of moth-balled TDD
> unit-tests. So I think your question about sustaining over time and across
> people can be asked about traditional 'agile' testing regimes too.

of course. That said, one coder I know talked about how it was like someone sent them a message from the past when a test failed. If nothing else, it showed that someone once cared...

S

Renzo Borgatti

unread,
May 11, 2013, 9:04:31 AM
to london-c...@googlegroups.com
I'm somewhat late to this conversation but I'd like to add my opinion. My experience with Clojure is still at the beginning, but I'm surprised nobody has mentioned the Midje approach, first learned from http://vimeo.com/19404746 and then summarized here https://github.com/marick/Midje/wiki/The-idea-behind-top-down-development.

I definitely don't see that as the *only* possible approach, but it fits perfectly with my way of thinking top-down in most situations. The REPL is a useful tool for testing out possible implementations once the top-down approach has identified what "nouns" or "verbs" should be defined next. Sometimes I need to break the cycle (that is, switch to bottom-up) when the domain is so complicated that I can't clearly figure out what conversations between lower-level functions should happen to achieve the upper-level goal. But I have the impression that is just my issue, and I was able to see an improvement over time in how I can decompose a problem at some layer(n) of abstraction using what is going to be developed as the layer(n-1) terminology.
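For anyone who hasn't seen the style: a minimal sketch of Midje's top-down cycle (function names are invented for illustration, and it assumes the Midje library is on the classpath):

```clojure
(ns myapp.greet-test
  (:require [midje.sweet :refer :all]))

;; Declare a lower-level function that doesn't exist yet.
(unfinished lookup-name)

;; Write the upper-level function in terms of it.
(defn greeting [id]
  (str "Hello, " (lookup-name id)))

;; `provided` stubs the unfinished function, so this fact can
;; pass before lookup-name is implemented; later the stub is
;; replaced by a real defn and the provided clause removed.
(fact "greeting uses the looked-up name"
  (greeting 42) => "Hello, Rich"
  (provided (lookup-name 42) => "Rich"))
```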

So in that sense I use the REPL exactly like I used to use the Ruby IRB during my Rails years. In the Ruby community there was never such a discussion about replacing TDD/BDD with untested exploration in IRB plus some tests afterwards to increase confidence (that is, if I'm not wrong, the approach that was labelled REPL-driven development in this thread). The REPL was and still is an incredibly useful tool for testing out ideas with the quickest feedback cycle.

Really, nobody else here is using top-down TDD with mocks à la Midje?

Cheers
Renzo

Robert

unread,
May 11, 2013, 9:22:37 AM
to london-c...@googlegroups.com
I think there is a massive difference between Ruby and Clojure due to the way that state is handled. If you use referential transparency then behaviour at the REPL is the same as when the program is operating. In Ruby everything is mutable and you have extensive metaprogramming, so I think you need more conventional tests.

I personally think that you shouldn't need to use mocks in Clojure testing, since if you are composing functions you should be able to pass a stub value or function to achieve what you would have mocked in an OO language using a collaborator/injection design.
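A toy sketch of this point (all names are invented): where an OO design would inject a mock collaborator, here the "collaborator" is just an argument.

```clojure
;; Production code takes its "collaborator" as a parameter.
(defn total-price
  "Converts amount using whatever rate-lookup fn is supplied."
  [fetch-rate amount]
  (* amount (fetch-rate :gbp->usd)))

;; In production fetch-rate might call an HTTP service; at the
;; REPL or in a test, a stub function stands in for it:
(total-price (constantly 1.5) 100)
;; => 150.0
```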

Renzo Borgatti

unread,
May 11, 2013, 12:12:58 PM
to london-c...@googlegroups.com
Hello,

On 11 May 2013, at 14:22, Robert <shudd...@gmail.com> wrote:

> I think there is a massive difference between Ruby and Clojure due to the way that state is handled. If you use referential transparency then behaviour at the REPL is the same as when the program is operating. In Ruby everything is mutable and you have extensive metaprogramming so I think you need more conventional tests.

There are different kinds of complexity. The one we usually talk about with OO compared to FP is managing mutable state in a world of concurrency. Although FP gives us referential transparency, simplification for concurrency and parallelisation, and better readability, I'm not sure it isn't complicated in other ways. Anyway, we are human and we make mistakes in simple and complex systems alike. If it's not clear already, I don't trust myself more in Clojure than in any other language.

You also seem to forget about the aspect of TDD I'm most interested in. There are many ways to functionally decompose a problem. What is your process for selecting one decomposition over another? Because it really matters. Top-down TDD allows me to think about that upfront, when complexity is still under control.

Trying something once in the REPL is not enough to guarantee protection from regressions. It is unlikely you'll never change that code again.

> I personally think that you shouldn't need to use mocks in Clojure testing since if you are composing functions you should be able to pass a stub value or function to achieve what you would have mocked in an OO language using a collaborator/injection design.

I probably confused the terminology; I was talking about stubbing (not invocation verification). If you look at the approach, there is no other way to achieve top-down red/green/refactor without stubbing. You'd end up implementing the entire thing upfront on a red test.

Cheers
Renzo

Steve Freeman

unread,
May 12, 2013, 4:13:40 AM
to london-c...@googlegroups.com
(caveat, not actually tried in production).

Midje, as commonly explained, doesn't quite do what we try to do in the OO world. In particular, my response to a need to substitute an implementation by stealth is to wonder if I should be passing in a new collaborator. In the Clojure world, I think that might be a partial function, but that might be a bad idea. Much of the point of TDD is to use the feedback from tests to change the code; Midje (and its OO equivalents) reduces that tension.

A couple of weeks ago I was looking at someone's Clojure code which included some I/O, and it struck me how static the code actually was--it felt more like Pascal with better collection handling. Importing a function from a namespace is a static dependency, so I need something like Midje to break it. If I pass the relevant behaviour in, then I have more flexibility (and maybe more complexity). If I have code that really is functional, then it's even easier to test.
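To make the contrast concrete (a sketch with invented names, not from the original post): a direct call to a required function is a static dependency, while passing the behaviour in keeps the seam open.

```clojure
;; Statically bound: save-report-static can only ever use spit.
(defn save-report-static [path report]
  (spit path (pr-str report)))

;; Behaviour passed in: any writer fn will do, so a test or a
;; REPL session can substitute one without Midje's stubbing.
(defn save-report [write! path report]
  (write! path (pr-str report)))

;; e.g. collect "writes" in an atom instead of touching disk:
(def written (atom {}))
(save-report (fn [p s] (swap! written assoc p s)) "r.txt" {:ok true})
(get @written "r.txt")
;; => "{:ok true}"
```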

Thinking ill-formed as ever...

S.

Chris Ford

unread,
May 12, 2013, 2:26:21 PM
to london-c...@googlegroups.com
I've had similar thoughts. Brian Marick kindly blogged his thoughts when I asked him about it:


I'm still struggling with it, but if I understand his position correctly, it's that if your tests force you to make your concepts explicit, you get (some of) the design payoff without necessarily having to make the lower-level parts atomically reusable in production code.

Chris


Steve Freeman

unread,
May 12, 2013, 4:12:39 PM
to london-c...@googlegroups.com
Brian is very likely to have good ideas. 

Last time it took us quite a while to gain enough experience to understand the techniques, plus a few dead ends that are still widely used. Further research needed...

S. 

Sent from a device without a keyboard. Please excuse brevity, errors, and embarrassing autocorrections. 

Julian Birch

unread,
May 12, 2013, 4:34:59 PM
to london-c...@googlegroups.com
There's a fair bit of Clojure code out there that's more static than it should be, imo. There's a tension between clarity and flexibility, and all too often in Clojure people assume that flexibility will be achieved through the fact that functions are vars, but that only works with thread-bound computation. Stuart Sierra wrote a good article on the with-resources anti-pattern.


J


Robert

unread,
May 12, 2013, 6:01:11 PM
to london-c...@googlegroups.com
I am the kind of (dreaded?) person who creates foo2 rather than re-writing foo in place, so I haven't found myself doing much revisiting of written code, compared to changing my threaded pipelines and creating replacement versions of old functions.

Where I've been contributing to codebases with tests I've tended to write a few tests upfront that specify all the behaviour the function should have, and if I'm being strict about it I start developing from the "function name does not exist" error. This does mean I get some invalid test specifications from time to time, because I don't spend enough time in the test suite and forget how "is" or its equivalent works.
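That workflow, sketched with clojure.test (the function and its behaviour are invented for illustration):

```clojure
(require '[clojure.string :as str]
         '[clojure.test :refer [deftest is run-tests]])

;; In the workflow above, the deftest is written first and fails
;; with "Unable to resolve symbol: slugify" until this exists.
(defn slugify [s]
  (-> s str/lower-case (str/replace #"\s+" "-")))

(deftest slugify-behaviour
  (is (= "hello-world" (slugify "Hello World")))
  (is (= "" (slugify ""))))

(run-tests)  ; runs the tests in the current namespace
```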

Robert

unread,
May 12, 2013, 6:09:12 PM
to london-c...@googlegroups.com
We're at the edge of my understanding at this point, but in terms of Stuart's post I would say that I definitely prefer library code that takes resources as parameters, so I'm glad there is a theoretical reason for that being good.

I'm not sure that referencing a namespace in another namespace is "static", since you're only binding an identity in a context and not definitively, but I feel like I've lost the thread now.

Chris Ford

unread,
May 13, 2013, 2:15:06 AM
to london-c...@googlegroups.com
My take is that referencing a namespace is "static" in the sense that you're depending directly on all the implementations in that namespace. Once you've made your choice, you can't swap out that implementation for others.

Directly referencing and invoking ordinary fns is quite similar to using static methods (including constructors) in Java. Mr Sierra's approach is similar to classic OO dependency injection, because the protocols his resources implement provide polymorphism.

Of course, you could also use multimethods for polymorphism, but that seems less common nowadays.
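A small sketch of the two polymorphism routes mentioned here (all names invented): a protocol-backed resource passed as an argument, and a multimethod dispatching on the data itself.

```clojure
;; Protocol route: the resource is an argument satisfying a
;; protocol, so callers can swap in another implementation.
(defprotocol Store
  (put! [this k v])
  (fetch [this k]))

(defrecord MemStore [state]
  Store
  (put! [_ k v] (swap! state assoc k v))
  (fetch [_ k] (get @state k)))

;; Multimethod route: dispatch on a value in the data.
(defmulti area :shape)
(defmethod area :square [{:keys [side]}] (* side side))
(defmethod area :circle [{:keys [r]}] (* Math/PI r r))

(let [s (->MemStore (atom {}))]
  (put! s :a 1)
  (fetch s :a))
;; => 1
```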

Chris

Renzo Borgatti

unread,
May 13, 2013, 5:44:42 AM
to london-c...@googlegroups.com
Based on my understanding, Midje is just inspired by its OO friends but it ends up working differently. Some observations in particular:

- the goal is pure abstraction discovery. The focus is mainly on stubbing to allow top-down development. Interaction verification (where you would verify something has been called 3 times) is less of a need in an immutable world.
- the process removes all stubs by providing the final implementation of all the initially stubbed functions. Once the code is written, all the "provided" clauses should be gone and the "unfinished" section empty.
- there is no need to inject a collaborator in a function and in general no notion of collaborator as intended in the OO space. Namespaces should not be intended as objects, they are just grouping artifacts. There are instead collaborating functions (one function calls n-other functions and so on) and stubbing those functions is what Midje provides.
- the process is not intended to be cross-namespace (at least I wouldn't use it that way); all functions are generated in the current namespace, and when there are definite clusters of cohesive functions, that is a refactoring that can happen after the code is written.

Renzo

Bruce Durling

unread,
May 13, 2013, 7:00:08 AM
to London Clojurians
I'm actually OK with "statically" depending on particular things in my
*applications*. I agree with Mr Sierra about keeping dependencies out
of libraries as discussed in this thread on the main clojure list:

https://groups.google.com/forum/?fromgroups=#!topic/clojure/WuS31RSiz_A

I find it interesting that he prefers violating DRY to avoid Maven
dependency hell.

I think quite often we try to make our applications, and code in
general, overly flexible. I've found my Clojure code to be quite
plastic and I'd rather make the changes in code than in
something very code-like that we call "configuration".

cheers,
Bruce

Jennifer Smith

unread,
May 13, 2013, 7:30:57 AM
to london-c...@googlegroups.com
 
I think quite often we try to make our applications, and code in
general, overly flexible. I've found my Clojure code to be quite
plastic and I'd rather make the changes in code than in
something very code-like that we call "configuration"

Agree! A lot of the justification for making our apps and code flexible is avoiding the dreaded cost of change. In other languages, the penalty for spattering dependency on a library (or just dependencies) across the entire codebase is pain when you need to change it. It's good that we consider these things when writing Clojure (rather than saying "we don't need no stinkin' DI"), but we need to review what these ideals mean in context.

If it came down to it, static dependencies can easily be shimmed: if you did need to swap out that JSON library, you could create a wrapper that just supplies the fns used by your application. It's a stop-gap, but an achievable one *. That, and monitor your dependency on libraries: grep for json/ and see how much penetration that JSON library has into your application.
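Such a wrapper might look like this (the namespace and fn names are hypothetical, and it assumes the Cheshire library is on the classpath):

```clojure
;; myapp.json: the only namespace that knows which JSON library
;; is in use, so swapping libraries later is a one-file change.
(ns myapp.json
  (:require [cheshire.core :as json]))

(defn parse [s]
  (json/parse-string s true))   ; true => keywordize keys

(defn generate [m]
  (json/generate-string m))
```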

As a rule, this is why I have started to segment where I use "use" and where I use "require". For sets of standalone/pure functions (like parse-json etc.) I tend to favour require with a generic :as binding (I guess this is something of what Robert is talking about); use, more when I am depending on core "app code". If I think something *should* be a library I often don't go to the bother of making it so, I just pull out the dependency into its own module. Later on I can go back and make it into a library if I need to.

I think the notion of how you manage state is a lot more subjective. I guess I am saying "it's up to you". All I ask is that a library gives me the choice of how to manage state. If I am going to make a decision/mistake to use global vars, I'd rather this be a choice for me to make, not the library I depend on.

* In fact I am 90% sure I did this swap between clj-json and Cheshire - I just changed the namespace and I was done. Possibly because one is a rewrite/reimagining of the other.

Julian Birch

unread,
May 13, 2013, 3:13:42 PM
to london-c...@googlegroups.com
More's the pity. Multimethods are superior to protocols in every sense except performance. But it's easier to translate OOP thinking directly to protocols. (David Nolen is apparently planning to have another bash at predicate dispatch, which will hopefully be more flexible and faster than current multimethods.)

Agree about libraries vs applications.  Public OSS libraries need to lean in favour of leaving the decisions to the library users.  Applications need to make decisions and shouldn't muck around with unneeded flexibility.

I seem to recall someone saying that the namespace stuff in Clojure was done in a hurry and wasn't entirely satisfactory.  I do look at node's require with envy sometimes (only sometimes).

The thing I keep hitting with Clojure is that since you put all of your dependencies in parameters, and very rarely use classes as bags of functionality, there's a tension between passing in enough information for rich functionality and passing in as little data as possible. Not helped by the fact I regard five parameters to a function as a code smell. Of course, it's a lot easier to just put something on the back burner because you don't like the design when it's a hobby project.
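One common way round the five-parameter smell is to bundle the dependencies into a single map (a sketch with invented names, not a recommendation from the thread):

```clojure
;; Instead of (process db cache logger clock item), bundle the
;; dependencies into one map and destructure just what's used.
(defn process [{:keys [clock log!]} item]
  (log! "processing" item)
  {:item item :at (clock)})

;; Call sites build the map once; the REPL can pass cheap fakes:
(def deps {:clock (constantly 0)
           :log!  (fn [& _] nil)})   ; no-op logger

(process deps :order-1)
;; => {:item :order-1, :at 0}
```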

Stuart Sierra isn't alone in thinking that dependency management trumps DRY. Udi Dahan's article in 97 Things says the same thing.

Well, that was a bit wide-ranging and unfocussed.

Julian