Here's a quote from right at the beginning of [4]:
> the impact on the development team could be reduced by providing a better framework that enabled the test authors to implement a greater proportion of the tests themselves, independently of the development team.
This snippet reveals quite a lot to me about the blog author's experience with TDD / BDD. Namely:
1) They see a clear separation between 'the development team' and a QA team of 'test authors'.
2) They think that doing TDD has an (implicitly negative) impact on that development team.
3) They think that having those 'test authors' develop tests 'independently of the development team' is a good idea.
This strikes me as a context where the team are still, basically, building software backwards [6]. Despite what the post says about them building their tests first, I'm not convinced this is a team who can teach me anything new about BDD. When I've seen BDD work, it's because the same people who write the code are involved in writing the tests. That involvement is where the magic happens: it's when you write the test that you get a chance to consider, and gain insights into, the work you're about to do. A tool that helps reduce this 'burden' seems to me entirely misguided.
Using Cucumber does require investment in writing automation code, but my experience is that writing that code teaches us useful things about how easy our system is to automate. Time well spent.
I've got three things you might have missed, which come down to competence, trust, and domain learning.
Competence
What you're perhaps missing is the sheer number of teams who use Cucumber but don't have the experience, confidence, or skill to maintain what you or I would call a well-factored step definition / glue code layer. We haven't made nearly enough effort to explain to people how to do this (I hope to remedy this with my talk at CukeUp) and even if we had, so many teams sadly still make test maintenance the responsibility of people who are just not good programmers. People in that situation are naturally going to want a way to do less (apparent) programming.
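To make 'well-factored' a little less abstract, here's a minimal sketch of the shape I mean, written against a recent Cucumber-JVM. The checkout domain, the `CheckoutSteps` class and the `CheckoutHelper` are all invented purely for illustration:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertTrue;

// Thin step definitions: each one delegates straight to a helper that
// speaks the language of the domain, so the automation knowledge lives
// in one well-factored place instead of being copy-pasted between steps.
public class CheckoutSteps {

    private final CheckoutHelper checkout = new CheckoutHelper();

    @Given("a customer with an empty basket")
    public void aCustomerWithAnEmptyBasket() {
        checkout.startNewOrder();
    }

    @When("they buy {string}")
    public void theyBuy(String item) {
        checkout.addItem(item);
        checkout.placeOrder();
    }

    @Then("the order is confirmed")
    public void theOrderIsConfirmed() {
        assertTrue(checkout.lastOrderWasConfirmed());
    }
}

// Stubbed so the sketch compiles; a real helper would drive the actual system.
class CheckoutHelper {
    private boolean confirmed;
    void startNewOrder() { confirmed = false; }
    void addItem(String item) { /* add the item via the UI or an API */ }
    void placeOrder() { confirmed = true; }
    boolean lastOrderWasConfirmed() { return confirmed; }
}
```

The point is that the step definitions stay one line deep; everything that tends to get tangled up lives behind a helper with a small, domain-flavoured interface, which is much easier to keep healthy.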
Trust
Additionally, I think a lot of the imperative vs declarative debate that I see teams have comes down to trust. I sometimes show this scenario as an example of taking the declarative style too far:
```gherkin
Given the system is running
When I use the system
Then it should work, perfectly
```
This is a joke, but it makes a serious point: if you were to use this scenario for your app, you'd be placing a great deal of trust in whoever wrote the step definition for "it should work, perfectly". If you're reading this scenario, you need to feel that trust in order to actually believe that this behaviour is implemented. Having a way for non-technical people (I would include many QA folk in this) to look beneath the declarative what and see the more imperative how might help those people to trust what the automated tests are doing.
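To make that concrete, here's a rough sketch (again against a recent Cucumber-JVM, with a made-up `SystemDriver`) of what 'looking beneath' that step might reveal. The declarative wording stays in the feature file; the imperative how sits in the step definition for anyone who wants to check it:

```java
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertTrue;

public class SystemWorksSteps {

    // Hypothetical driver standing in for whatever automates the real app.
    private final SystemDriver system = new SystemDriver();

    // The feature file says only "Then it should work, perfectly".
    // The trust lives down here: these are the concrete checks that the
    // declarative phrase stands in for, and a sceptical reader can open
    // this method to see exactly what gets verified.
    @Then("it should work, perfectly")
    public void itShouldWorkPerfectly() {
        assertTrue(system.respondedWithinSeconds(2));
        assertTrue(system.lastResponseWasSuccessful());
        assertTrue(system.noErrorsWereLogged());
    }
}

// Stubbed so the sketch compiles; a real driver would talk to the system.
class SystemDriver {
    boolean respondedWithinSeconds(int seconds) { return true; }
    boolean lastResponseWasSuccessful() { return true; }
    boolean noErrorsWereLogged() { return true; }
}
```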
Domain learning
What I have also seen is that the level of abstraction a team wants to use in their scenarios will change over time. Early on in a project when they're still learning about the domain, they'll want to put in a lot of detail. Each new piece of learning feels worth writing down, so their scenarios reflect this. Over time, as they gain more understanding of the domain, they feel safer to imply more things in their descriptions of the behaviour, and the scenarios become more declarative, more abstract.
I think this is natural, and I don't think it's something you can rush or force. That's the one justifiable reason I can see for having some kind of macro support built into Gherkin: it would make it possible to easily and safely refactor scenarios that contain too much detail, to make them more declarative. I can see how it would be abused by a team like the one from [4], though, and I'm not sure that's a trade-off worth making. I certainly wouldn't get sucked into adding the equivalent of #step or #steps to Cucumber-JVM; I think that's definitely the wrong approach.
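For a team that does want to make an over-detailed scenario more declarative today, without macros and without calling steps from step definitions, the refactoring I'd reach for is to push the detail down into shared helper methods. Here's a rough sketch of that shape, using an invented registration domain:

```java
import io.cucumber.java.en.Given;

public class RegistrationSteps {

    private final RegistrationHelper registration = new RegistrationHelper();

    // Early days, imperative wording: each small step maps to one action,
    // recording the detail the team was still learning about.
    @Given("I have filled in the sign-up form with valid details")
    public void iHaveFilledInTheSignUpForm() {
        registration.fillInFormWithValidDetails();
    }

    @Given("I have confirmed my email address")
    public void iHaveConfirmedMyEmailAddress() {
        registration.confirmEmail();
    }

    // Later, declarative wording: one step implies the whole journey.
    // The detail hasn't vanished; it has moved into the helper, where it is
    // maintained in one place, rather than being re-invoked via #step/#steps.
    @Given("I am a registered user")
    public void iAmARegisteredUser() {
        registration.fillInFormWithValidDetails();
        registration.confirmEmail();
    }
}

// Hypothetical helper: the single home for the imperative 'how'.
class RegistrationHelper {
    void fillInFormWithValidDetails() { /* drive the real sign-up form here */ }
    void confirmEmail() { /* follow the real confirmation link here */ }
}
```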