--
---
You received this message because you are subscribed to the Google Groups "Lonely Coaches Sodality" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lonely-coaches-so...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
I'm not completely sure what the intent of the question is. Are you offering an alternative to demos/dojo/mobbing?
Overall I'm convinced that to go as far as possible on the quality path we have to make (almost) every step valuable to business. I did find some answers in this old thread although not specifically on technical coaching. I've also had some success with defining the goal of the coaching, and the indicators that are supposed to predict improvement. A few examples from my experience:
- Bugs in QA + code complexity + code feels simpler
- Bugs in production
- Delivering twice as frequently + no more bugs in production
- Time to validate a feature
What are your thoughts, experiences and suggestions? If you don't do this, why not?
--
Your two responses indeed address the question very well. It's interesting to see how explicitly you use ToC. There is a lot of useful information in what you wrote, thank you.
One important point in your approach is "building slack in" / "increasing spare capacity". I suppose those are the same. To me this means limiting how much work is accepted, not increasing skill.
Practicing and increasing skill does not create spare capacity per se, because we can always choose to increase the workload to fill the spare capacity created.
And technical skill in particular, like testing and refactoring, does not make me finish a story quicker, but it creates less rework for the future.
Slack and spare capacity are indeed necessary to create space for training, for reducing rework, and for preparatory refactoring. So let me play devil's advocate: when you say you concentrate on creating spare capacity, you're suggesting slowing down the currently observable output in order to do better at some point in the future, whenever we figure out what to do with that capacity. This would make it very easy for people to fall back into old habits, and for external groups to apply more pressure.
I'm pretty sure this is not what you meant, and I'm curious what's behind your choice of the words "building slack in" / "increase spare capacity".
--
On Mon, Feb 11, 2019 at 10:33 PM Johan Martinsson <martinss...@gmail.com> wrote:
Overall I'm convinced that to go as far as possible on the quality path we have to make (almost) every step valuable to business. I did find some answers in this old thread although not specifically on technical coaching. I've also had some success with defining the goal of the coaching, and the indicators that are supposed to predict improvement. A few examples from my experience:
- Bugs in QA + code complexity + code feels simpler
- Bugs in production
- Delivering twice as frequently + no more bugs in production
- Time to validate a feature
What are your thoughts, experiences and suggestions? If you don't do this, why not?

I have been trying to do this for a decade now, and I frankly don't know whether I'm making any real progress. My primary strategy has revolved around using Theory of Constraints/Just in Time to coach teams, emphasizing that we are trying to improve in areas that seem likely to provide the most immediate value to the business. I hope that it helps to show everyone that I'm trying to address real problems and not merely pushing pre-packaged solutions to technical problems in my comfort zone. I do this to try to build trust with everyone, especially managers and even executives.

Often I find myself in the situation where the people in my sphere of influence feel stuck optimizing locally. I advise them against it, and instead encourage them to see their lack of influence as an opportunity to build spare capacity without needing to exploit it. This way, we can focus on improving the overall technical skill of the group without worrying as much about what they specifically produce. For the managers/executives who understand ToC, this works very well, but for the others, I have to decide whether to follow them into the trap of optimizing locally. If they're going to throw money out of the window, I might as well catch it. In that case, I add some stress-reduction techniques, so that as they "do the wrong thing", at least I can help them reduce the risk of burning themselves out in the process.

Now, assuming that either (1) we agree that programming is their bottleneck or (2) they believe that they have to produce better results from programming and I can't convince them otherwise, I try at least to apply the ToC ideas within the smaller system. I can't stop them from optimizing locally if they decide that they need to do it. I decide in this case to help them, then hope that they realize it's a bad idea before they do too much damage.
I emphasize finding local bottlenecks, cutting feedback loops in half, measuring cycle time over person-hours of effort, that kind of thing.

Although I don't feel comfortable predicting improvement precisely ("I expect 60%-80% reduction in defects"), I continuously describe the outcomes that I expect from a change in work process, and I try to frame them in direct business terms, or in the worst case, in ToC terms. I find that once we agree on the scope of the (working) system we're analyzing and the basic principles of throughput accounting and bottlenecks, it becomes clear which intermediate goals make sense and which don't. We also often have open discussions about the difference between improving towards a specific business-oriented goal and improving towards increasing spare capacity (like buying a lottery ticket). When we state our intentions more clearly, it becomes easier to justify various experiments, and when conflicts arise, it becomes easier to justify our decisions about the priority of those experiments. ("Let's stop TDD here for now, because suddenly we need to focus on just automating regression tests for this part of the system, and we'll worry about the design later.")

So I present ToC as a way to figure out which goals make sense for an engagement, as well as a framework for deciding which specific interventions to consider next. For some, ToC sounds "business-y" enough to make it clear that I'm trying to deliver results to the business, and not merely line my pocket nor make the programmers feel better about their work, even though I'm *also* trying to do those things.

Does any of that address your question well at all? I'm running on less sleep than usual and I expected to be in a car going to an airport right now, so I'm writing this to try to jump-start my brain this morning.
--
On Feb 12, 2019, at 12:18 AM, Johan Martinsson <martinss...@gmail.com> wrote:
Frankly I'm a bit lost.
Back to the business benefits. Most people in our industry are familiar with the first and perhaps the second. Only teams who stick to the discipline over time experience the third.
1. Lower defects. Sometimes it’s an old microtest that fails, and thus warns you that you just unexpectedly broke something. But usually it goes more like this: You write a test for something you want the code to do. You run it and it fails as expected. You write the code that you believe implements that behavior and will pass the test, but then it fails again! “Oh, I have an off-by-one error,” or “Oh, I have one too few Boolean negations…” The microtest caught the error before you had a chance to move on to the next bit of code. Without that test, your bug may have stayed dormant until it woke up and bit your customers.
2. Faster enhancements. Over time, a powerful safety-net of blazing fast microtests covers the code in a protective shell of engineering specifications. Teams can rapidly refactor, i.e., reshape design to accommodate new features (or “User Stories” as they are known to Agile teams). We choose to make changes to the design in tiny steps, and if a test tells us we broke something, the feedback is immediate, and we can roll back to a known good state using Ctrl-Z (Cmd-Z) or using our repository software. When you know you have a safety-net, you can alter the design more quickly and confidently. This gives you the ability to complete user stories as fast as if you were writing them from scratch (or faster, because you’ve already written useful supporting code for another story).
3. Eventually, something even more incredible may occur. A new, innovative, and often surprising feature is requested. I call these “Black Swan User Stories”: “Black Swan” is the term used for a disruptive, impossible-to-predict, and risky event, and “User Story,” the common Agile product backlog item. A “Black Swan User Story,” then, is a story created spontaneously by someone on the team (often the Product Advocate, or—in Scrum terminology—the Product Owner) that utilizes the existing product features, architecture, and team’s talents in a novel way, resulting in a significant increase in value. I’ve seen these open up an entirely new market for the product. I’ve seen at least one of these occur for every team that has embraced TDD and diligent refactoring as their principal way of writing code. The caveat: You have to stick to it.
The first of these was in 2001 while I was working with Jim Shore. For various reasons, we had limited ourselves to UTF-8/Latin-American character sets. We had started out small, and with limited time, money, and goals. Enter the Black Swan User Story from East Asia! “Please re-internationalize the code that handles all user input-output, so it will properly store and display Japanese, Chinese, and Korean characters.” A major architectural change. An unexpected request. A lot of juicy business value upside.
The Black Swans will arrive, eventually, but unless you’ve turned your code’s swamp into a fairly clear lake before they arrive, they will not land for you.
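To make the first benefit concrete, here is a minimal sketch of a microtest catching an off-by-one error before it ships; the function and numbers are hypothetical, purely for illustration.

```python
def pages_needed(items, items_per_page):
    """Return how many pages are needed to display all the items."""
    # The naive `items // items_per_page` has a classic off-by-one error:
    # 11 items at 10 per page need 2 pages, not 1. Rounding up fixes it.
    return (items + items_per_page - 1) // items_per_page

# The microtests, written first. The second one fails against the naive
# version, catching the bug before it can lie dormant and bite a customer.
assert pages_needed(10, 10) == 1
assert pages_needed(11, 10) == 2
assert pages_needed(0, 10) == 0
```

The point is not the arithmetic itself but the moment of feedback: the failing run arrives seconds after the mistake, while the context is still in your head.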
--
If we increase skill, we do not need to accept more work requests, so on the contrary I interpret increasing skill as increasing spare capacity.
And technical skill in particular, like testing and refactoring, does not make me finish a story quicker, but it creates less rework for the future.

Again, on the contrary, testing and refactoring might help you finish this story sooner, depending on the exact situation. For example, if you haven't started this story yet, better testing and better refactoring might help you avoid rework on this story. The up-front investment in learning usually leads to deferring the savings to the future.
Sometimes. Quite often, I see that a group is already wasting capacity that they could better invest in increasing skill (or even just recovering energy). I help them with basic time/energy management, better meetings, and other ways to recover capacity. I also help them visualize their actual work, identifying unplanned work requests that they volunteer to do without realizing it, and so on. I usually worry that I won't find anything, and then I almost always find enough wasted capacity to pay for the learning they need to do.
Does that clarify what I mean? Does that become any more helpful for you?
Apologies if this is too late, or repeating something someone else said, or not at all related. For some reason this particular message jumped out at me. I think it was your first sentence. I hope I can help in some small way.
1. I talk about why professional chefs sharpen their knives. Short read: https://agileforall.com/sharpen-your-knives/
...
So refactoring, and its supporting practices, are the developer-equivalent of sharpening the knives.
The Black Swans will arrive, eventually, but unless you’ve turned your code’s swamp into a fairly clear lake before they arrive, they will not land for you.
Like I said, this may be apropos of nothing. But I doubt it.
If we increase skill, we do not need to accept more work requests, so on the contrary I interpret increasing skill as increasing spare capacity.
I do see your reasoning, but that requires a rather strict notion of the amount of work: a measure that stays the same over time, like velocity or throughput. In reality I've rarely been able to keep the same reference stories over any extended period of time. Nevertheless, quite a few people claim to get good results measuring performance through velocity gains; while I doubt this is very useful, it would mean that velocity could be used to increase spare capacity. Perhaps I've mostly lacked discipline here and could experiment with this again.
And technical skill in particular, like testing and refactoring, does not make me finish a story quicker, but it creates less rework for the future.

Again, on the contrary, testing and refactoring might help you finish this story sooner, depending on the exact situation. For example, if you haven't started this story yet, better testing and better refactoring might help you avoid rework on this story. The up-front investment in learning usually leads to deferring the savings to the future.

Only in a few cases have I felt that I was finishing a story faster by TDD-ing, even less so in the early stages of adoption. Given my personal experience, that's not the primary thing I promote about it. However, measuring the lack of rework is effective. I certainly use that.
Sometimes. Quite often, I see that a group is already wasting capacity that they could better invest in increasing skill (or even just recovering energy). I help them with basic time/energy management, better meetings, and other ways to recover capacity. I also help them visualize their actual work, identifying unplanned work requests that they volunteer to do without realizing it, and so on. I usually worry that I won't find anything, and then I almost always find enough wasted capacity to pay for the learning they need to do.

Interesting, I'll keep that in mind.

Does that clarify what I mean? Does that become any more helpful for you?

It was already helpful; now I feel I pretty much understand all of your points.
So refactoring, and its supporting practices, are the developer-equivalent of sharpening the knives.
[...] I usually explain that refactoring allows us to remove accidental complexity, which we can only do if we have tests to protect the existing behavior.
The point is that if we train testing and refactoring, we not only write the tests faster but also remove more accidental complexity once we have them! This has been effective in getting people's attention, both from developers and from executives.
Quite often I've seen that this completely changes the way people work: they start to see how much better the code could be, and given that they know how to write reasonable tests in a reasonable time, they start to consider it worth doing. It's as if there was a local optimum before, given their skills at the time; as we train, the local optimum shifts towards better code and more tests. That's one of the reasons I've been focusing more and more on training, because of the momentum it inevitably creates.
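As a minimal sketch of that dynamic: with a few tests pinning down current behavior, collapsing accidentally complex code becomes a safe, routine move. The function and tests below are hypothetical, for illustration only.

```python
# Characterization tests pin down the current behavior first; with them
# in place, we can collapse a branchy, accidentally complex version into
# something simpler and know immediately if we broke anything.

def full_name(first, last):
    """Join two optional name parts with a single space.

    Before refactoring this was a cascade of None/empty-string branches;
    the tests below made it safe to collapse them into one expression.
    """
    return " ".join(part for part in (first or "", last or "") if part.strip())

# The safety net: these passed before and after the refactoring.
assert full_name("Ada", "Lovelace") == "Ada Lovelace"
assert full_name("Ada", "") == "Ada"
assert full_name(None, "Lovelace") == "Lovelace"
assert full_name("", "") == ""
```

The skill being trained is exactly this pairing: writing such tests quickly enough that the simplification feels worth doing.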
Still, lately one team struggling with a massive legacy codebase is taking much longer than I expected to move. I think one way of helping them is to find simpler, shorter experiments they could try, and get some feedback on whether it could work for them.
Thanks for thinking of me, Tim. It's true, I'm almost entirely limited to NYC commute radius. If they're open to occasional on-site and mostly remote, that would change the equation for me. But in that case they'd have lots more options than just me. :-p
I can't think of anyone who's already near Buffalo. Near-ish Baltimore, I'd ask George Dinwiddie to see if he has spare capacity and could help. +1 on talking to Zee and to Josh, too.
- Amitai
--