Linking technical change to business objectives

Johan Martinsson

Feb 11, 2019, 9:33:31 PM
to Lonely Coaches Sodality
In technical agile coaching, there's a style where we teach through demos, or let the team teach itself through dojos and mobbing.
For instance, Llewellyn Falco explains his typical day in a video.

Personally, I find it important to frame this training in a business/results dimension. I fully understand that's not everyone's choice, and it probably has drawbacks. But let me first explain why it's important to me.
  • All too often, teams end up reassured that everything is fine because code coverage is X%, ignoring the fact that bugs reappear and that code quality is low again.
  • Change takes energy, and energy is replenished when we see progress.
  • If management/marketing/support or any other stakeholder doesn't see results, they'll apply pressure in an unhealthy way; but if they do see progress, they'll encourage the team to continue.
  • The team might end up refactoring code that never changes.
  • The team might end up writing tests for existing code that won't enable refactoring, and discover the mistake late.
  • They have no good feedback for experimenting to improve their practices.
Overall I'm convinced that to go as far as possible down the quality path, we have to make (almost) every step valuable to the business. I did find some answers in this old thread, although not specifically on technical coaching. I've also had some success with defining the goal of the coaching and the indicators that are supposed to predict improvement. A few examples from my experience:
  • Bugs in QA + code complexity + code feels simpler
  • Bugs in production
  • Delivering twice as frequently + no increase in bugs in production
  • Time to validate a feature
What are your thoughts, experiences and suggestions? If you don't do this, why not?

Thanks
Johan

Wouter Lagerweij

Feb 11, 2019, 10:30:56 PM
to lonely-coac...@googlegroups.com
I'm not completely sure what the intent of the question is. Are you offering an alternative to demos/dojo/mobbing?

For me, there are situations where I might use metrics to guide an engagement, and situations where I wouldn't. And the situation, and thus the goals and metrics, might very well change along the way. Life tends not to be simple.

Some organisations are sensitive to these kinds of incentives, some are not. Some might be far from being able to measure even a baseline. Some have such a tendency to misuse metrics that I wouldn't want to increase the temptation. For many, better skills in the teams might not translate into better results, because the bottleneck is elsewhere.

Wouter

--

---
You received this message because you are subscribed to the Google Groups "Lonely Coaches Sodality" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lonely-coaches-so...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Johan Martinsson

Feb 12, 2019, 3:18:53 AM
to Lonely Coaches Sodality
I'm not completely sure what the intent of the question is. Are you offering an alternative to demos/dojo/mobbing?

Frankly, I'm a bit lost. Demos/dojos/mobbing are the What and the How; to me, those are indeed the core activities. Now, I'm one of those people who need a Why to work effectively, so I'm constantly looking for a good (often changing) reason. To me, not stating a business-oriented Why seems almost outright dangerous. I've certainly had my share of failures that I think could have been avoided. The problem is that I find very few people who do this, particularly among technical coaches, so the more I look for guidance and experiences I can learn from, the more I worry that I'm missing something important. Yet I feel so strongly that getting better at understanding and linking good technical practices to business outcomes is the most promising way of spreading them.
 
To be clear, I'm looking for:
- examples where you've successfully linked tech coaching to changes in the global picture
- examples where you've avoided this, or should have avoided it (like the example you gave, Wouter)
- who else feels strongly about this?

Charlie Poole

Feb 12, 2019, 6:22:43 AM
to lonely-coac...@googlegroups.com
Not sure if I can answer in your terms. I'll explain and try anyway...

Before I was a coach, I was a technical consultant. My value to the
teams I worked on was that I knew more about some technology than
anyone else who worked there. So that's what I think of when you say
"technical." As a coach, I never dealt with technology except in a
very general sense of ensuring that the team had an adequate technical
basis in any technology they wanted to depend on. I have never thought
of things like the XP or Scrum practices as being "technical" at all.
Of course, when it gets down to implementation in a particular
language or platform, it gets technical. There are patterns of testing,
for example, that have to be implemented differently in C# than in
JavaScript.

I see all the stuff we work on as falling under some general headings...
* Why are we doing it (i.e. the business or other need we are
trying to provide for).
* What are we doing (the stories that the software supports)
* How are we doing it
  * Technical practices (languages, design patterns, etc.)
  * Non-technical practices (e.g. all the XP practices)

I think a team needs to know about all of these areas. That doesn't
mean, at least for me, that each individual item under "Why" will map
one-to-one with any particular technical practice. I think we often
have to work on the three levels of Why, What and How separately.

You asked for anecdotes. Because I'm as old as dirt, some people may
have heard this before, but here goes...

A team was working on software to allow the creation of online
questionnaires that collected demographic info and interests from
subscribers to a group of magazines. They spent a lot of time seeking
help about whether a certain type of question was important, how it
was best structured, etc. But in a team discussion, I was a bit
surprised to discover that nobody on the team had the slightest idea
of why the company wanted this information... what was it good for?

The company was a publisher of multiple consumer-oriented print
magazines, by the way. I could have given them a reasonable
explanation myself. Instead, I talked to an executive in the company
telling him "You know, nobody has ever explained to your programmers
what value this information has to the company." He was aghast! Not
that they didn't know on their own, but that nobody had ever told them
anything about how the company worked, how it made its money, etc.
(Interestingly, as a high-level manager, he had no doubt at all that
they __needed__ this sort of background to do a good job.)

The upshot was that this same executive acted as a trainer for a
half-hour, explaining how the company worked and specifically how
advertising rates were determined by the demographics of subscribers.
I noticed that the developers began to ask much more informed
questions about the requirements and even suggested things once they
knew why they were being asked to do things.

This seems quite rudimentary to me. One of the key jobs of any
coach or consultant is to get people who don't normally talk to one
another to start doing so.

OTOH, I can't see it tying into any specific practice.

Johan Martinsson

Feb 13, 2019, 5:10:19 AM
to Lonely Coaches Sodality
Thank you Charlie for your thoughts. Could you expand on what you mean by "three levels of Why, What and How"

Charlie Poole

Feb 13, 2019, 5:32:45 AM
to lonely-coac...@googlegroups.com
Sure... Using my anecdote as an example...

WHAT we are doing... at a high level, creating a system for creating
multiple questionnaires. Obviously, the WHAT can be subdivided into
multiple stories, for example "User is able to create a questionnaire
entry in the form of a multiple choice question with two to six
choices." As I'm using the terms, the sub (and sub-sub and
sub-sub-sub) divisions are also part of WHAT.

HOW actually has two parts. We are doing this with certain technology
(e.g. an ASP.NET Core app in C#) and also using certain techniques in
our own work (TDD, for example).

I saved WHY for last because it's the focus of both your question and
my example. WHY is the company doing this? One (the major) aspect is
that the company makes money from advertisers and, as is normal in the
industry, sets rates according to its ability to match the
demographics of its subscribers to the demographic that each
advertiser is trying to reach.

Said differently, HOW is the traditional domain of a software
development team. WHAT is something that Agile (esp. IMO XP) has
taught us to focus on very strongly. WHY is somewhat neglected,
because it's the role of the Customer or Product Owner to simply tell
the team WHAT will give value. WHY explains the reasoning behind the
choices made by the customer with the idea that this context will (or
at least may) prove useful to all the team members. In practice, it
almost always does.

To be sure, all of the above is an over-simplification. WHAT, WHY and
HOW exist at multiple levels. The WHY of one level may become the WHAT
of a higher level and the HOW may become the WHAT of a lower level.
This ties in with the traditional "Five Whys" pattern of thinking.

Hope that clarifies more than it confuses!

Charlie

PS: In many traditional organizations, asking "Why?" may lead you to a
point where the answer is some form of "Because I say so!" This is an
organizational anti-pattern and can serve as a hint that we are trying
to sow our agile seed on infertile ground.

J. B. Rainsberger

Feb 13, 2019, 8:57:37 AM
to lonely-coac...@googlegroups.com
On Mon, Feb 11, 2019 at 10:33 PM Johan Martinsson <martinss...@gmail.com> wrote:
 
Overall I'm convinced that to go as far as possible on the quality path we have to make (almost) every step valuable to business. I did find some answers in this old thread although not specifically on technical coaching. I've also had some success with defining the goal of the coaching, and the indicators that are supposed to predict improvement. A few examples from my experience
  • bugs in QA + code complexity + code feels simpler
  • Bugs in production.
  • Delivery twice as frequently + not more bugs in production
  • Time to validate a feature
What are your thoughts, experiences and suggestions? If you don't do this, why not?

I have been trying to do this for a decade now, and I frankly don't know whether I'm making any real progress. My primary strategy has revolved around using Theory of Constraints/Just in Time to coach teams, emphasizing that we are trying to improve in areas that seem likely to provide the most immediate value to the business. I hope that it helps to show everyone that I'm trying to address real problems and not merely pushing pre-packaged solutions to technical problems in my comfort zone. I do this to try to build trust with everyone, especially managers and even executives.

Often I find myself in the situation where the people in my sphere of influence feel stuck optimizing locally. I advise them against it and instead encourage them to see their lack of influence as an opportunity to build spare capacity without needing to exploit it. This way, we can focus on improving the overall technical skill of the group without worrying as much about what they specifically produce. For the managers/executives who understand ToC, this works very well, but for the others, I have to decide whether to follow them into the trap of optimizing locally. If they're going to throw money out of the window, I might as well catch it. In that case, I add some stress-reduction techniques, so that as they "do the wrong thing", at least I can help them reduce the risk of burning themselves out in the process.

Now, assuming that either (1) we agree that programming is their bottleneck or (2) they believe that they have to produce better results from programming and I can't convince them otherwise, I try at least to apply the ToC ideas within the smaller system. I can't stop them from optimizing locally if they decide that they need to do it. I decide in this case to help them, then hope that they realize it's a bad idea before they do too much damage. I emphasize finding local bottlenecks, cutting feedback loops in half, measuring cycle time over person-hours of effort, that kind of thing.
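To make "measuring cycle time over person-hours of effort" concrete, here is a minimal sketch of what that could look like; the ticket names and dates are invented for illustration:

```python
# A minimal sketch of "measuring cycle time over person-hours":
# track how long each work item takes from start to done, rather
# than how much effort people log. (Illustrative data only.)
from datetime import date
from statistics import median

# (item, work started, work finished)
work_items = [
    ("PAY-101", date(2019, 2, 1), date(2019, 2, 4)),
    ("PAY-102", date(2019, 2, 2), date(2019, 2, 12)),
    ("PAY-103", date(2019, 2, 5), date(2019, 2, 7)),
]

cycle_times = [(done - started).days for _, started, done in work_items]
print("median cycle time:", median(cycle_times), "days")  # prints: median cycle time: 3 days
```

The point is that the number comes from the work items themselves, not from effort estimates, so it gives the team a feedback loop they can try to cut in half.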

Although I don't feel comfortable predicting improvement precisely ("I expect 60%-80% reduction in defects"), I continuously describe the outcomes that I expect from a change in work process and I try to frame them in direct business terms, or in the worst case, in ToC terms. I find that once we agree on the scope of the (working) system we're analyzing and the basic principles of throughput accounting and bottlenecks, it becomes clear which intermediate goals make sense and which don't. We also often have open discussions about the difference between improving towards a specific business-oriented goal and improving towards increasing spare capacity (like buying a lottery ticket). When we state our intentions more clearly, it becomes easier to justify various experiments, and when conflicts arise, it becomes easier to justify our decisions about the priority of those experiments. ("Let's stop TDD here for now, because suddenly we need to focus on just automating regression tests for this part of the system, and we'll worry about the design later.")

So I present ToC as a way to figure out which goals make sense for an engagement, as well as a framework for deciding which specific interventions to consider next. For some, ToC sounds "business-y" enough to make it clear that I'm trying to deliver results to the business, and not merely line my pocket nor make the programmers feel better about their work, even though I'm *also* trying to do those things.

Does any of that address your question well at all? I'm running on less sleep than usual and I expected to be in a car going to an airport right now, so I'm writing this to try to jump-start my brain this morning.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca ::
blog.thecodewhisperer.com

J. B. Rainsberger

Feb 13, 2019, 9:08:04 AM
to lonely-coac...@googlegroups.com
Sometimes programmers and their immediate managers feel powerless to affect the wider business system around them, but they feel stress and they want my help. In that situation, it doesn't necessarily help me to show them that I am trying to deliver results to the business. On the contrary, I risk giving the programmers the feeling that I'm there for The Company but not for them, and that damages my ability to help them. 

In these situations, I find it helpful to frame my interventions purely in terms of building spare capacity in order to increase options for them. This way, they become better able to satisfy The Company once The Company figures out what the hell they need from this group. I frame it as "You will become generally more effective, more able to serve The Company, and generally have more energy to deal with Their nonsense." In situations like these, that message seems to balance fairly well serving the group with serving The Company. I teach the group about the value of framing their improvements as business objectives, rather than trying to do that myself.

All this comes from the basic principle that if you're not sure what to do next, then try to increase slack so that you can more easily do what's needed once you figure that out.

J. B. Rainsberger :: tdd.training :: www.jbrains.ca


Johan Martinsson

Feb 14, 2019, 3:59:06 AM
to Lonely Coaches Sodality
That makes it very clear what you mean by the three levels; thank you.

PS: Let me clarify what I meant by Technical Agile Coach. The vertical bar of my personal T-shape is the non-Scrum part of XP. Of course, the company usually needs both, so I usually collaborate with other agile coaches. Perhaps there are better names for that, like "developer coach"; I've certainly been looking for a good, commonly understood name.

Johan

Johan Martinsson

Feb 14, 2019, 4:55:49 AM
to Lonely Coaches Sodality
Your two responses indeed address the question very well. It's interesting to see how explicitly you use ToC. There is a lot of useful information in what you wrote; thank you.

One important point in your approach is building slack in / increasing spare capacity; I suppose those are the same. To me, this means limiting how much work is accepted, not increasing skill. Practicing and increasing skill does not create spare capacity per se, because we can always choose to increase the workload to fill the spare capacity created. And technical skill in particular, like testing and refactoring, does not make me finish a story quicker, but creates less rework for the future. Slack and spare capacity are indeed necessary to create space for training, for reducing rework, and for preparatory refactoring. So let me play devil's advocate: when you say you concentrate on creating spare capacity, you're suggesting slowing down what is currently observed in order to do better sometime in the future, whenever we figure out what to do with it. That will make it very easy for people to fall back into old habits, and for external groups to apply more pressure.

I'm pretty sure this is not what you meant, and I'm curious what's behind your choice of the words "building slack in" / "increasing spare capacity".

J. B. Rainsberger

Feb 15, 2019, 10:49:38 AM
to lonely-coac...@googlegroups.com
On Thu, Feb 14, 2019 at 5:55 AM Johan Martinsson <martinss...@gmail.com> wrote:
 
Your two responses indeed address the question very well. It's interesting to see how explicitly you use ToC. There are a lot of useful information in what you wrote, thank you.

You're welcome.

One important point in your approach is building slack in / increasing spare capacity. I suppose those are the same. To me this means limiting how much work is accepted, not increasing skill.

Increasing skill means increasing capacity, so if we do that alone, then we increase spare capacity.
 
Practicing and increasing skill does not create spare capacity per se, because we can always choose to increase the workload to fill the spare capacity created.

If we increase skill, we do not need to accept more work requests, so on the contrary I interpret increasing skill as increasing spare capacity.
 
And technical skill in particular, like testing and refactoring, does not make me finish a story quicker, but creates less rework for the future.

Again, on the contrary, testing and refactoring might help you finish this story sooner, depending on the exact situation. For example, if you haven't started this story yet, better testing and better refactoring might help you avoid rework on this story. The up-front investment in learning usually leads to deferring the savings to the future.
 
Slack and spare capacity are indeed necessary to create space for training, for reducing rework, and for preparatory refactoring. So let me play devil's advocate: when you say you concentrate on creating spare capacity, you're suggesting slowing down what is currently observed in order to do better sometime in the future, whenever we figure out what to do with it. This will make it very easy for people to fall back into old habits, and for external groups to apply more pressure.

Sometimes. Quite often, I see that a group is already wasting capacity that they could better invest in increasing skill (or even just recovering energy). I help them with basic time/energy management, better meetings, and other ways to recover capacity. I also help them visualize their actual work, identifying unplanned work requests that they volunteer to do without realizing it, and so on. I usually worry that I won't find anything, and then I almost always find enough wasted capacity to pay for the learning they need to do.

In the worst case, the group has to admit that they've been going more quickly than they can (working over capacity), and I help them explore the two (general) options of letting the project crash (burnout, quitting, disastrous delivery) or slowing things down to improve (disappointment, tension, anxiety). When a person makes a conscious decision to accept a temporary Death March, they feel better than when they gradually fall into an indefinite unsustainable pace. If they decide to let this project crash, then I encourage them to focus their energy on preparing for the next project, possibly at the next company. When a specific person tells me that they're ready to quit, then I encourage them to try to have frank conversations with decision-makers about what they need to do to stop the next project from crashing and the next wave of people from quitting.

Some people, when they try to improve, risk falling back into old habits. In the worst case, I have to point that out to them when they do it; in the best case, someone in or near their group does that. Often we have to discuss the same things multiple times before the situation changes. It might take a long time for a person to feel comfortable discussing the real problems underneath the symptoms.
 
I'm pretty sure this is not what you meant and I'm curious what's behind your choice of the words building slack in / increase spare capacity.

We can create some spare capacity without slowing down. Slowing down isn't just something that helps in the future; it might help significantly now. Slowing down might be the only thing that saves the project or the group or that part of the business. And some decision-makers don't accept this idea. I care about the results, but I can't force them to see what they won't see; I can only ask questions and offer warnings.

Does that clarify what I mean? Does that become any more helpful for you?
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca ::
blog.thecodewhisperer.com


Rob Myers

Feb 15, 2019, 5:09:52 PM
to lonely-coac...@googlegroups.com
On Feb 12, 2019, at 12:18 AM, Johan Martinsson <martinss...@gmail.com> wrote:

Frankly I'm a bit lost. 

Apologies if this is too late, or repeating something someone else said, or not at all related. For some reason this particular message jumped out at me. I think it was your first sentence. I hope I can help in some small way.

I often find myself having to justify tech practices to execs. I take two approaches:

1. I talk about why professional chefs sharpen their knives. Short read: https://agileforall.com/sharpen-your-knives/

2. I tell them one or two of my first-person success stories with what I like to call “Black Swan User Stories”.

There are three levels of benefit to tech practices. These three all seem to hinge on one practice in particular: The ability to refactor with confidence is key to delivering value iteratively and with faster feedback, without compromising any of the value in which we’ve already invested time and money. 

The other tech practices (TDD, BDD, CI, pairing, mobbing) all support diligent, confident refactoring in some way. So refactoring, and its supporting practices, are the developer-equivalent of sharpening the knives. 

Back to the business benefits. Most people in our industry are familiar with the first and perhaps the second.  Only teams who stick to the discipline over time experience the third.

1. Lower defects. Sometimes it's an old microtest that fails, and thus warns you that you just unexpectedly broke something. But usually it goes more like this: you write a test for something you want the code to do. You run it and it fails, as expected. You write the code that you believe implements that behavior and will pass the test, but then it fails again! “Oh, I have an off-by-one error,” or “Oh, I have one too few Boolean negations…” The microtest caught the error before you had a chance to move on to the next bit of code. Without that test, your bug might have stayed dormant until it woke up and bit your customers.

2. Faster enhancements.  Over time, a powerful safety-net of blazing fast microtests covers the code in a protective shell of engineering specifications.  Teams can rapidly refactor, i.e., reshape design to accommodate new features (or “User Stories” as they are known to Agile teams). We choose to make changes to the design in tiny steps, and if a test tells us we broke something, the feedback is immediate, and we can roll back to a known good state using Ctrl-Z (Cmd-Z) or using our repository software. When you know you have a safety-net, you can alter the design more quickly and confidently. This gives you the ability to complete user stories as fast as if you were writing them from scratch (or faster, because you’ve already written useful supporting code for another story).

3. Eventually, something even more incredible may occur. A new, innovative, and often surprising feature is requested. I call these “Black Swan User Stories”: “Black Swan” is the term used for a disruptive, impossible-to-predict, and risky event, and “User Story” the common Agile product backlog item. A “Black Swan User Story,” then, is a story created spontaneously by someone on the team (often the Product Advocate, or, in Scrum terminology, the Product Owner) that utilizes the existing product features, architecture, and team's talents in a novel way, resulting in a significant increase in value. I've seen these open up an entirely new market for the product. I've seen at least one of these occur for every team that has embraced TDD and diligent refactoring as their principal way of writing code. The caveat: you have to stick to it.
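A tiny sketch of benefit 1 in code (an invented example, not from any of the projects mentioned here): the microtest, written first, catches an off-by-one error moments after the code is written.

```python
# Benefit 1 illustrated: a microtest catches an off-by-one error
# before the code ships. (Invented example; names are hypothetical.)

def last_n_buggy(items, n):
    return items[-n - 1:]                 # off-by-one: one item too many

def last_n(items, n):
    return items[-n:] if n > 0 else []    # fixed after the test failed

# The microtest exposes the bug immediately, then goes green on the fix:
assert last_n_buggy([1, 2, 3, 4], 2) == [2, 3, 4]   # oops, expected [3, 4]
assert last_n([1, 2, 3, 4], 2) == [3, 4]            # green after the fix
```

Without that test, the extra item might have shipped and stayed dormant until a customer noticed it.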

The first of these was in 2001 while I was working with Jim Shore. For various reasons, we had limited ourselves to UTF-8/Latin-American character sets. We had started out small, and with limited time, money, and goals. Enter the Black Swan User Story from East Asia! “Please re-internationalize the code that handles all user input-output, so it will properly store and display Japanese, Chinese, and Korean characters.” A major architectural change. An unexpected request. A lot of juicy business value upside. 

And we completed the story in three days. 

I recall very clearly thinking, “Wow! That proved that all our eXtreme Programming efforts were totally worth it! But that’s a once-in-a-lifetime event. Won’t happen again…”

It happened at least twice in 2002, on my Menlo Innovations team in Ann Arbor.  "Dynamically convert all HTML reports to PDF reports? No problem. 3 days.” and "Help another group convert their FoxBase database to Oracle, and clean up the schema and the data at the same time? Oh, an hour to train the other team. We already built a converter, because we were continuously converting part of another database, nightly."

It happened once in 2003, during the 6-month period when I was the XP coach for a Bay Area CMBS loan securitization group. “We have to completely revamp the XML schema we were using for B2B transactions? Let’s snap on an XSLT transform!” (That idea came from a “junior” dev with less than 2 years of experience.) We used JUnit to test-drive the XSLT. A few days each week, instead of months of rework. We also promoted that “junior” dev to Senior Dev in less than 6 months.

Black Swans may be rare and unpredictable, but they will fly over your pool of code from time to time. If you live in a swamp, they will continue on (“That’s a Major Architectural Change! Sorry, that would take us six months. Still want to do it? No? We thought not. Hey, no one else on Earth will be able to do it either. It’s…impossible! Now let’s get back to the reality of updating our infinite mediocre Jira tickets...”). 

If you have kept your code clean, though, they will gladly land. (“We can have it for you in one sprint. Oh, it’s likely going to double the product’s revenue stream? Sweet! We look forward to those annual bonuses!”)   

The Black Swans will arrive, eventually, but unless you’ve turned your code’s swamp into a fairly clear lake before they arrive, they will not land for you.

Like I said, this may be apropos of nothing. But I doubt it.

Rob
@agilecoach



Johan Martinsson

unread,
Feb 16, 2019, 5:28:58 AM2/16/19
to lonely-coac...@googlegroups.com

If we increase skill, we do not need to accept more work requests, so on the contrary I interpret increasing skill as increasing spare capacity.
I do see your reasoning, but that requires a rather strict notion of the amount of work: a measure that stays the same over time, like velocity or throughput. In reality I've rarely been able to keep the same reference stories over any extended period. Nevertheless, quite a few people claim to get good results measuring performance through velocity gains. While I doubt this is very useful, it would mean velocity could be used to increase spare capacity. Perhaps I've mostly lacked discipline here and could experiment with this again.

 
 
And in particular technical skill, like for instance testing and refactoring does not make me finish a story quicker but creates less rework for the future.

Again, on the contrary, testing and refactoring might help you finish this story sooner, depending on the exact situation. For example, if you haven't started this story yet, better testing and better refactoring might help you avoid rework on this story. The up-front investment in learning usually leads to deferring the savings to the future.

Only in a few cases have I felt that I was finishing a story faster by TDD-ing, even less so in the early stages of adoption. Given my personal experience, that's not the primary thing I promote about it. However, measuring the lack of rework is effective; I certainly use that.
 
 
Sometimes. Quite often, I see that a group is already wasting capacity that they could better invest in increasing skill (or even just recovering energy). I help them with basic time/energy management, better meetings, and other ways to recover capacity. I also help them visualize their actual work, identifying unplanned work requests that they volunteer to do without realizing it, and so on. I usually worry that I won't find anything, and then I almost always find enough wasted capacity to pay for the learning they need to do.
Interesting, I'll keep that in mind
 
Does that clarify what I mean? Does that become any more helpful for you?
It was already helpful, now I feel I pretty much understand all of your points.



Johan Martinsson

unread,
Feb 16, 2019, 6:43:45 PM2/16/19
to lonely-coac...@googlegroups.com

Apologies if this is too late, or repeating something someone else said, or not at all related. For some reason this particular message jumped out at me. I think it was your first sentence. I hope I can help in some small way.
By no means too late :) I appreciate the help
 
1. I talk about why professional chefs sharpen their knives. Short read: https://agileforall.com/sharpen-your-knives/
... 
So refactoring, and its supporting practices, are the developer-equivalent of sharpening the knives. 

I first thought sharpening the knives was referring to training, like a refactoring exercise, but it's actually referring to refactoring, in particular preparatory refactoring. It's a nice metaphor. So do you also talk about training for refactoring and testing? Do you use the same metaphor for training?

I usually explain that refactoring allows us to remove accidental complexity, which we can only do if we have tests to protect it. The point is that if we train testing and refactoring, we not only write the tests faster but also remove more accidental complexity when we have them! This has been effective at getting people's attention, both from developers and executives.
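To make that concrete: the usual first step on untested legacy code is a characterization test, which pins down what the code does today (right or wrong) so refactoring can proceed safely. The `legacyTotal` method below is entirely invented for illustration; the expected values were found by running the code, not by reading a spec. Plain assertions stand in for JUnit to keep the sketch self-contained.

```java
public class PriceCalculator {

    // Hypothetical legacy method, deliberately convoluted: loop instead of
    // multiplication, plus a hidden 10% bulk discount above 10 units.
    static int legacyTotal(int qty, int unitCents) {
        int total = 0;
        for (int i = 0; i < qty; i++) total += unitCents;
        if (qty > 10) total = total - (total / 10);
        return total;
    }

    public static void main(String[] args) {
        // Characterization tests: assert what the code DOES today,
        // not what we think it should do.
        check(legacyTotal(1, 500), 500);    // simple case
        check(legacyTotal(12, 100), 1080);  // discount we discovered by running it
        check(legacyTotal(11, 100), 990);   // boundary just above the threshold
        System.out.println("all characterizations hold");
    }

    static void check(int actual, int expected) {
        if (actual != expected)
            throw new AssertionError(actual + " != " + expected);
    }
}
```

With these tests in place, the loop can be collapsed to a multiplication and the discount rule given a name, and the tests tell us immediately if the behavior drifted.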

Quite often I've seen that this completely changes the way people work: they start to see how much better the code could be, and given that they know how to write reasonable tests in reasonable time, they start to consider it worth doing. It's as if there was a local optimum before, given their skills at the time. As we train, the local optimum shifts towards better code and more tests. That's one of the reasons I've been focusing more and more on training: the inevitable momentum it creates.

Still, lately one team struggling with a massive legacy codebase is taking much longer than I expected to move. I think one way of helping them is to find simpler, shorter experiments they could try, to get feedback on whether the approach could work for them.
 

The Black Swans will arrive, eventually, but unless you’ve turned your code’s swamp into a fairly clear lake before they arrive, they will not land for you.

Like I said, this may be apropos of nothing. But I doubt it.

That's a beautiful metaphor, I really like it. I guess it also matters how you tell it; there's an emotional potential in it. Thanks.
 

J. B. Rainsberger

unread,
Feb 28, 2019, 11:34:28 AM2/28/19
to lonely-coac...@googlegroups.com
On Sat, Feb 16, 2019 at 6:28 AM Johan Martinsson <martinss...@gmail.com> wrote:

If we increase skill, we do not need to accept more work requests, so on the contrary I interpret increasing skill as increasing spare capacity.
 
I do see your reasoning, but that requires a rather strict notion of the amount of work: a measure that stays the same over time, like velocity or throughput. In reality I've rarely been able to keep the same reference stories over any extended period. Nevertheless, quite a few people claim to get good results measuring performance through velocity gains. While I doubt this is very useful, it would mean velocity could be used to increase spare capacity. Perhaps I've mostly lacked discipline here and could experiment with this again.

I don't argue that we can measure the amount of extra spare capacity, except in retrospect. Even so, if increasing skill doesn't mean increasing capacity, then what else could it mean? We increase capacity by judging better which work not to do, by making fewer mistakes, by reducing the cost of those mistakes, or by generating more profit from the capacity that we use. I interpret "generate more profit" generally as "increase capacity", because I think of capacity in terms of throughput.

I don't understand the nature of our disagreement here. :)
 
And in particular technical skill, like for instance testing and refactoring does not make me finish a story quicker but creates less rework for the future.

Again, on the contrary, testing and refactoring might help you finish this story sooner, depending on the exact situation. For example, if you haven't started this story yet, better testing and better refactoring might help you avoid rework on this story. The up-front investment in learning usually leads to deferring the savings to the future.

Only in a few cases have I felt that I was finishing a story faster by TDD-ing, even less so in the early stages of adoption. Given my personal experience, that's not the primary thing I promote about it. However, measuring the lack of rework is effective; I certainly use that.

"Faster" than what? I can easily argue that "finishing" a story sooner, but then having more rework later, means that we did not judge "finished" well in the past. Once again, we can only know this in retrospect. It might work better to focus on the flow of features over time, rather than the time it takes to "finish" any individual story. Sometimes we need to focus on the time to deliver something for *this* story, but we know that if we keep our focus their for too long, then we risk reducing capacity over time, and moreover that capacity will eventually crash without warning.

Sometimes. Quite often, I see that a group is already wasting capacity that they could better invest in increasing skill (or even just recovering energy). I help them with basic time/energy management, better meetings, and other ways to recover capacity. I also help them visualize their actual work, identifying unplanned work requests that they volunteer to do without realizing it, and so on. I usually worry that I won't find anything, and then I almost always find enough wasted capacity to pay for the learning they need to do.
Interesting, I'll keep that in mind
 
Does that clarify what I mean? Does that become any more helpful for you?
It was already helpful, now I feel I pretty much understand all of your points.

Thanks!

J. B. Rainsberger

unread,
Feb 28, 2019, 11:42:27 AM2/28/19
to lonely-coac...@googlegroups.com
On Sat, Feb 16, 2019 at 7:43 PM Johan Martinsson <martinss...@gmail.com> wrote:
 
So refactoring, and its supporting practices, are the developer-equivalent of sharpening the knives. 

[...] I usually explain that refactoring allows us to remove accidental complexity, which we only can do if we have tests to protect it.

I get nervous reading "only" here. I recommend something more like "Most of the time, I don't feel confident refactoring without tests. I tend to accept that risk only in situations where I'm refactoring in order to save significant costs to adding tests to existing code; otherwise, it doesn't feel worth the risk."

The point is that if we train testing and refactoring, we not only write the tests faster but also remove more accidental complexity when we have them! This has been effective to get peoples attention, both from developers and executives.

I see this as a key problem holding us back: when teaching refactoring, and especially rescuing legacy code, the audience sees only additional cost, not even investment, and most of them don't have the patience and foresight to see the return on investment. I show them a compounding curve, and some of them get it, but I still run into the "delayed results" problem, so they find it easy to assume that they won't see profit early enough to pay back the investment.

Quite often I've seen that this completely changes the way people work, they start to see how much better the code could be, and given they know how to write reasonable tests in reasonable time they start to consider it's worth doing it. Like if there was a local optimum before, given their skills at that time. As we train the local optimum is shifted towards better code and more tests. That's one of the reason I've been focusing more and more on training, because of the inevitable momentum it creates.

When I reach the point with a group of showing them how refactoring simplifies their "annoying" tests, that usually makes some strong impact and causes them to feel more comfortable learning more and practising more. I find this hard to do in an open training course, but relatively easy to do when I can spend 2-3 days with a single cohesive small group of people. They start to see the momentum and they want to try to keep it going on their own.

Still lately, one team struggling with massive legacy is taking much longer than I thought to move. I think one way of helping them is to find simpler, shorter experiments they could try and get some feedback on whether it could work for them.

Indeed. I often have to drive the refactoring a lot on the first day or two, in order to take advantage of my situational judgment, lack of "pickledness" (learned helplessness in that code base), and general fluidity of technique. My challenge then becomes to help them feel confident that they can continue from where I have left off, and I don't know how well I manage to do that. Worst case, they can hire me for a 90-minute remote pairing session a few weeks later in order to follow up.

Jon Kern

unread,
Mar 7, 2019, 2:15:49 PM3/7/19
to lonely-coac...@googlegroups.com
Hey Network!

Asking for a friend ;-)

Do you know of anyone in the Buffalo, NY, or Wilmington DE/Baltimore MD areas that may be interested in helping with a lean / agile transformation (financial services) over the next few years? Great backing from the C-level.

Coaches needed for Process, Product, UX, Engineering, DevOps…

Open to employee and subcontracting. Mostly full-time, but possibly some intervals.

If interested, I can send you a PDF with more details and a contact.


Tim Ottinger

unread,
Mar 9, 2019, 12:32:50 PM3/9/19
to lonely-coac...@googlegroups.com
Amitai? He usually doesn’t go too far from NYC I think.

You should talk to Josh at IL. We’ve expanded a lot on what we offer in the past few years, including the usual training and non-contiguous weeks of coaching, but extended to long-term coaching, staff augmentation, and even a school for immersive developer training (IL Academy).  We might have something that fits the bill. 

Why don’t you send me the information, because I have a few friends who might be interested. 

Peace,
Tim
--
Tim Ottinger, Anzeneer, Industrial Logic
-------------------------------------
http://www.industriallogic.com/
http://agileotter.blogspot.com/

Zee Spencer

unread,
Mar 9, 2019, 1:50:17 PM3/9/19
to lonely-coac...@googlegroups.com
Hey Jon,

I'd be happy to offer my (and my team's) skills as technical trainers/coaches. We focus mostly on pairing, refactoring, etc. Reach back out to me directly if you are interested!

Our specialty is legacy front-end code (via Betsy Haibel), and I do a bit of infrastructure and CI stuff. Jennifer focuses on one-on-one and group coaching for engineering managers and directors, but is happy to jump in as a pair helping guide people through refactoring.

We all have fin-tech experience as well.

Zee

Amitai Schleier

unread,
Mar 11, 2019, 5:48:27 PM3/11/19
to lonely-coac...@googlegroups.com

Thanks for thinking of me, Tim. It's true, I'm almost entirely limited to NYC commute radius. If they're open to occasional on-site and mostly remote, that would change the equation for me. But in that case they'd have lots more options than just me. :-p

I can't think of anyone who's already near Buffalo. Near-ish Baltimore, I'd ask George Dinwiddie to see if he has spare capacity and could help. +1 on talking to Zee and to Josh, too.

- Amitai


Tim Ottinger

unread,
Mar 11, 2019, 5:50:30 PM3/11/19
to lonely-coac...@googlegroups.com
847-660-4523 but not between 9am and 7pm EDT.

I’m with clients all day.