With the rapid rise of AI coding assistants, there has been a lot of discussion about whether AI will replace software engineers. But I think a more fundamental question is worth asking:
Has AI actually made software better?
Specifically, in real-world production systems, have we observed measurable improvements in:
Software quality (fewer bugs, better reliability)?
Performance (faster execution, lower latency)?
Memory efficiency (lower memory footprint, fewer allocations)?
Overall user experience?
AI has clearly made it much faster to generate code. But software engineering has never been primarily constrained by typing speed. The real challenges have always been around system design, managing complexity, defining correct abstractions, and making sound trade-offs.
In your experience, has AI led to objectively better systems along these dimensions? Or has it mainly accelerated code production without fundamentally improving the underlying quality characteristics?
I’m particularly interested in observations from production environments rather than small demos or prototypes.
I fully agree. I recently revisited No Silver Bullet and found it still highly applicable in the AI era. Fred Brooks observed that “no single development, in either technology or management technique, … promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.”
In my view, AI does not change the core point: meaningful gains will come from a portfolio of improvements rather than a single silver bullet.
--
You received this message because you are subscribed to the Google Groups "software-design-book" group.
To unsubscribe from this group and stop receiving emails from it, send an email to software-design-...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/software-design-book/f1f585f9-89f1-4eb2-ab77-dccc48ea7eaan%40googlegroups.com.
Ken

The DORA research program studies the capabilities that software teams need in order to drive software delivery, and recently it has been looking into the effects of AI adoption in the workplace. It is based on a survey of five thousand professionals. You can view the most recent report here:

I think it gives a good answer to your question about software quality. The report states:

"[...] a majority (59%) of survey respondents also observe that AI has positively impacted their code quality. 31% perceive this increase to be only "slight" and another 30% observe neither positive nor negative impacts. However, just 10% of respondents perceive any negative impacts on their code quality as a result of AI use."

They also find that the effect on code quality is amplified when AI is combined with access to internal data.
And then there's the ethical question: okay, if this thing is actually starting to become intelligent --- and it's solving intellectual problems that used to require humans --- at what point does controlled use of AI constitute slavery? What is our obligation in providing training data that exposes AIs to the fullness of reality? In providing embodiment? How will future machine intelligences look back on our treatment of their ancestors?
The old joke about computers being rocks that we tricked into thinking seems a lot more prophetic and a lot less funny, these days.
paul
While nearly 91% of developers use AI assistants, the promised efficiency gains have largely plateaued around 10%.
The Chaperoning Effect: Randomized controlled trials show that while developers believe AI makes them 20% faster, they are actually 19% slower on complex tasks. This is due to the time required to review and debug "almost-right" AI code.
Onboarding Gains: One notable success is in onboarding, where the time for new hires to reach their 10th pull request has been cut in half.
The volume of code has increased, but its integrity has declined.
Issue Multiplier: AI-assisted pull requests contain 1.7 times more issues than human-authored ones, particularly regarding logic and correctness.
Security Vulnerabilities: Over 51% of AI-generated programs contain at least one security vulnerability, and credential exposure occurs nearly twice as often as in manual coding.
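To make the credential-exposure point concrete, here is a minimal sketch of the kind of pre-commit secret scan teams use to catch hardcoded keys before they land. The patterns and function name are my own illustration, not from any cited study; real scanners use far larger rule sets plus entropy checks.

```python
import re

# Illustrative patterns only; production scanners cover many more shapes.
SECRET_PATTERNS = [
    # variable assignments like: api_key = "sk-live-..."
    re.compile(r"""(?i)(api[_-]?key|secret|password|token)\s*=\s*['"][^'"]{8,}['"]"""),
    # the fixed prefix/length shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the lines of `source` that look like hardcoded credentials."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits

snippet = '''
db_host = "localhost"
api_key = "sk-live-0123456789abcdef"
'''
print(find_hardcoded_secrets(snippet))
```

A check like this only catches the obvious cases, which is exactly why it is cheap enough to run on every AI-assisted commit.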
The ease of generating code has led to "vibe coding," which prioritizes local fixes over global system health.
Collapsing Maintenance: Refactoring activity has collapsed by 60%, while code duplication has increased by 48% as AI tools generate similar solutions without recognizing opportunities for abstraction.
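The duplication pattern is easy to illustrate: an assistant asked about two similar call sites tends to emit two near-identical functions rather than one shared helper. A small sketch (the function names and data shapes are invented for illustration):

```python
# Two near-duplicate functions of the kind an assistant often generates
# independently for each call site:
def average_order_value(orders):
    total = 0.0
    for o in orders:
        total += o["amount"]
    return total / len(orders) if orders else 0.0

def average_refund_value(refunds):
    total = 0.0
    for r in refunds:
        total += r["amount"]
    return total / len(refunds) if refunds else 0.0

# The abstraction a refactoring pass would extract: one helper,
# parameterized on the field being averaged.
def average_field(records, field="amount"):
    if not records:
        return 0.0
    return sum(r[field] for r in records) / len(records)

orders = [{"amount": 10.0}, {"amount": 30.0}]
print(average_order_value(orders), average_field(orders))
```

Each duplicate is locally fine, which is the point: the cost only shows up globally, when the business rule changes and must be fixed in every copy.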
The "AI Slop" Crisis: Codebases are becoming "semantically hollow," meaning they function but no longer accurately reflect complex business logic, making them harder for humans to maintain.
The impact on system performance is split between low-level gains and application-level bloat.
Compiler Optimization Success: Frameworks like Google’s MLGO have used machine learning to achieve a 3% to 7% reduction in binary size and slight improvements in datacenter queries per second (QPS).
Incident Management: AI-powered observability has significantly improved operational resilience, reducing Mean Time to Resolution (MTTR) by 40% to 70% and customer-visible outages by 30% to 50%.
By 2026, the developer's role is shifting from manual implementer to "architectural orchestrator". Successful teams have moved away from "mega-prompts" toward strategic decomposition, treating AI-generated code as untrusted and funneling it through rigorous automated quality gates. The consensus among engineering leaders is that organizations must "vibe, then verify" to prevent AI-driven velocity from destroying long-term system stability.
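A "quality gate" in that sense can be as simple as a merge check that treats AI output as untrusted until every automated verifier has passed. A minimal sketch of the aggregation logic (the check names and failure details are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

def quality_gate(results: list[CheckResult]) -> tuple[bool, list[str]]:
    """Allow merge only if every automated check passed.

    Returns (allowed, reasons-for-rejection).
    """
    failures = [f"{r.name}: {r.detail or 'failed'}"
                for r in results if not r.passed]
    return (not failures, failures)

# Example: an AI-generated PR that builds and lints but fails its tests.
results = [
    CheckResult("build", True),
    CheckResult("lint", True),
    CheckResult("unit-tests", False, "3 of 41 tests failing"),
    CheckResult("secret-scan", True),
]
allowed, reasons = quality_gate(results)
print(allowed, reasons)
```

The design choice is that the gate is all-or-nothing: generated code gets no benefit of the doubt, so velocity gains survive only when they also pass the same verifiers human code must.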