Food for thought: "The Cathedral, the Bazaar, and the Winchester Mystery House"


David Szent-Györgyi

Apr 25, 2026, 1:25:50 PM (6 days ago) Apr 25
to leo-editor
This article is worth reading all the way to the end. It describes the change that AI brings to software development, yielding a third model of development that stands beside the Cathedral and the Bazaar: the Winchester Mystery House:

So what do we do with all this cheap code?

Unfortunately, everything else remains roughly the same cost and roughly the same speed. Feedback hasn’t gotten cheaper; the “eyeballs” that guided the software developed by the bazaar haven’t caught up to AI.

There is only one source of feedback that moves at the speed of AI-generated code: yourself. You’re there to prompt, you’re there to review. You don’t need to recruit testers, run surveys, or manage design partners. You just build what you want, and use what you build.

And that’s what many developers are doing with cheap code: building idiosyncratic tools for ourselves, guided by our passions, taste, and needs.


It argues that AI coding agents greatly increase the speed at which code is generated. Agent-generated code submitted to open source projects vastly increases the volume of submissions without increasing the number of reviewers or the infrastructure for coordinating open source development. The article then goes on to draw lessons for making the Bazaar and the Winchester Mystery House coexist, to suggest strategies for creators of agentic coding engines, and to point at the need for new tools.

David Szent-Györgyi

Apr 25, 2026, 1:32:44 PM (6 days ago) Apr 25
to leo-editor
My thought is that better bug-hunting is as much of a need as communication for people involved in collaborative software projects. For Python-based projects, my first thought is that Pydantic and Hypothesis make sense. I expect to learn more about that in upcoming job-related experience with software other than Leo.
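To make the Pydantic suggestion concrete, here is a toy sketch of the kind of data validation Pydantic automates from type annotations. The checks are written out by hand so the sketch runs with the standard library alone; `BugReport` and its fields are invented for illustration, not taken from Leo or any real project.

```python
from dataclasses import dataclass

# A hand-written stand-in for what Pydantic derives from annotations:
# reject malformed data at construction time, before it spreads.

@dataclass
class BugReport:
    title: str
    severity: int  # assumed range: 1 (cosmetic) to 5 (data loss)

    def __post_init__(self):
        if not isinstance(self.title, str) or not self.title.strip():
            raise ValueError("title must be a non-empty string")
        if not isinstance(self.severity, int) or not 1 <= self.severity <= 5:
            raise ValueError("severity must be an integer from 1 to 5")

report = BugReport("crash on save", 4)   # valid data passes
try:
    BugReport("", 9)                     # invalid data is rejected immediately
except ValueError as err:
    print(err)
```

Pydantic does the equivalent (and far more: coercion, nested models, JSON schema) without the hand-written `__post_init__`.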

Thomas Passin

Apr 25, 2026, 1:43:54 PM (6 days ago) Apr 25
to leo-editor
Some important things are getting lost in the rush to AI coding. LLMs don't have any coherent, consistent view of large-scale structure, which is essential for large systems especially. They don't have good reasoning ability; I've had them flail away like any intern. And most of all, correctness and freedom from bugs and errors cannot be achieved by testing alone.

In addition, good specifications are more important than ever for AI-written software, but that step seems to be mostly ignored.

David Szent-Györgyi

Apr 25, 2026, 7:24:15 PM (5 days ago) Apr 25
to leo-editor
On Saturday, April 25, 2026 at 1:43:54 PM UTC-4 tbp1...@gmail.com wrote:
Some important things are getting lost in the rush to AI coding. LLMs don't have any coherent, consistent view of large-scale structure, which is essential for large systems especially. They don't have good reasoning ability; I've had them flail away like any intern. And most of all, correctness and freedom from bugs and errors cannot be achieved by testing alone.

In addition, good specifications are more important than ever for AI-written software, but that step seems to be mostly ignored.

In September of 1984, I read with the greatest interest an article in that month's issue of Scientific American describing Eurisko, software that explores a domain of knowledge fed into it, given heuristics for scoring the results of its exploration. Author Douglas Lenat describes Eurisko as capable of exploring in ways of interest to humans, looking for universal truths and for unique cases; it can modify its heuristics as it runs, and it can set priorities both for which cases to explore and for modifying the heuristics. Lenat describes running it for weeks, with maintenance needed from time to time, as when it decided that modifying the heuristics was of so high a priority that adjusting them impeded its progress on the exploration.
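The loop Lenat describes can be caricatured in a few lines. This is a toy sketch, not Eurisko: the domain, the hidden objective, and both heuristics are invented for illustration, and Eurisko's actual knowledge representation was nothing like integers. The sketch only shows the idea of re-weighting heuristics as the search runs.

```python
import random

def objective(x):                      # the hidden truth about the toy domain
    return -(x - 37) ** 2

heuristics = {
    "near_40": lambda x: -abs(x - 40),            # a fairly good guide
    "is_even": lambda x: 1 if x % 2 == 0 else 0,  # a poor guide
}
weights = {name: 1.0 for name in heuristics}

def combined(x):                       # score a candidate under current weights
    return sum(weights[n] * h(x) for n, h in heuristics.items())

rng = random.Random(0)
best = 0
for _ in range(200):
    a, b = rng.randrange(100), rng.randrange(100)
    # "Modify the heuristics as it runs": a heuristic that ranks this random
    # pair the same way the objective does gains weight; otherwise it loses.
    truth = objective(a) > objective(b)
    for name, h in heuristics.items():
        weights[name] *= 1.05 if (h(a) > h(b)) == truth else 0.95
    for c in (a, b):                   # greedy exploration under current weights
        if combined(c) > combined(best):
            best = c
```

By the end of the run, the more predictive heuristic carries far more weight than the uninformative one, and the search has settled near the good region of the domain.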

Lenat's article describes his experience teaching Eurisko the rules for constructing the fleets of spaceships used in battles in the role-playing game Traveller, along with the game's predictive modeling of combat between fleets, and then having Eurisko design a fleet. That fleet won a 1981 tournament against fleets designed by expert human players of the game; its makeup was unlike that of the human-designed fleets. In short, it found and exploited a corner case in the rules. The corner case was eliminated from the rules for the following year's tournament. Lenat ran Eurisko on the modified rules; it found another corner case and produced a fleet that won that tournament as well. The operators of the tournament stated that if Lenat entered and won the 1983 tournament they would cease sponsoring it, so Lenat stopped attending.

I was in college in 1984, and I agreed with friends who shared my interest in computer science that we would like to have a tool like Eurisko. We did not foresee the issues that have arisen by 2026: violations of copyright or copyleft, displacement of human jobholders, the risk of the singularity. Eurisko was flawed and required maintenance by a human being.

AI agents available as I type this in 2026 do not reason. I am in no rush to use such things for tasks beyond the concrete and mechanical.

You are correct in writing that testing alone does not guarantee freedom from bugs and errors. That said, Pydantic and Hypothesis are aids to finding corner cases. Data validation as offered by Pydantic makes sense to me, as do the automated creation of test cases and the simplified failure reports offered by Hypothesis.
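What Hypothesis automates can be sketched by hand in miniature: generate random test cases, check a property, and shrink any failing case down to a small counterexample. Everything here is invented for illustration; `bad_sort` is a deliberately buggy function, and real Hypothesis does all of this (and smarter generation and shrinking) via its `@given` decorator and strategies.

```python
import random

def bad_sort(xs):
    return sorted(set(xs))        # bug: deduplication drops repeated elements

def prop_preserves_length(xs):    # a property a correct sort must satisfy
    return len(bad_sort(xs)) == len(xs)

def find_failure(prop, rng, tries=200):
    # Generate random small lists until one violates the property.
    for _ in range(tries):
        xs = [rng.randrange(10) for _ in range(rng.randrange(10))]
        if not prop(xs):
            return xs
    return None

def shrink(prop, xs):
    # Greedily delete elements while the property still fails,
    # then lower surviving elements toward 0 (kept only if it still fails).
    changed = True
    while changed:
        changed = False
        for i in range(len(xs)):
            smaller = xs[:i] + xs[i + 1:]
            if not prop(smaller):
                xs, changed = smaller, True
                break
        for i, v in enumerate(xs):
            if v > 0:
                lowered = xs[:i] + [0] + xs[i + 1:]
                if not prop(lowered):
                    xs, changed = lowered, True
    return xs

rng = random.Random(0)
failing = find_failure(prop_preserves_length, rng)
minimal = shrink(prop_preserves_length, failing)
print(minimal)                    # a two-element list with a repeated value
```

The shrunk counterexample makes the bug obvious in a way a long random failing list would not, which is exactly the simplification of failure reports mentioned above.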

David Szent-Györgyi

Apr 26, 2026, 8:33:35 AM (5 days ago) Apr 26
to leo-editor
I wrote: 

Pydantic and Hypothesis are aids to finding corner cases. Data validation as offered by Pydantic makes sense to me. The automation of the creation of test cases and the simplification of reports of failures offered by Hypothesis make sense to me.

I now read that the rework that produced Pydantic v2 broke the Hypothesis plugin that allowed the use of Hypothesis v5.29.0 with Pydantic v1.8.1. Pydantic's repository on GitHub states that Pydantic v1.10 ships within Pydantic V2, and the branch for v1.10 fixes is still receiving some fixes.


Thomas Passin

Apr 26, 2026, 10:19:49 AM (5 days ago) Apr 26
to leo-editor
Thanks for posting about Eurisko. It seems to have passed me by at the time. Fascinating!

David Szent-Györgyi

Apr 26, 2026, 1:44:27 PM (5 days ago) Apr 26
to leo-editor
On Sunday, April 26, 2026 at 10:19:49 AM UTC-4 tbp1...@gmail.com wrote:
Thanks for posting about Eurisko. It seems to have passed me by at the time. Fascinating!

I read today that Lenat's follow-on to Eurisko is Cyc, which is described in an article on Wikipedia. It represents knowledge using the formal language CycL. Lenat died in 2023; the articles on Cyc and on Lenat have nothing to say about the future of Cyc.

Lenat lived long enough to see the beginnings of the present-day advance of machine learning in the corporate sphere and in public life. Two quotations mentioned in the Wikipedia article on him seem worth remembering in light of that advance:

“If computers were human, they’d present themselves as autistic, schizophrenic, or otherwise brittle. It would be unwise or dangerous for that person to take care of children and cook meals, but it’s on the horizon for home robots. That’s like saying, ‘We have an important job to do, but we’re going to hire dogs and cats to do it.’”
- from 2014

"Sometimes the veneer of intelligence is not enough."
- from 2017

Thomas Passin

Apr 26, 2026, 10:26:09 PM (4 days ago) Apr 26
to leo-editor
I knew about Cyc, of course. I didn't keep up with it, but it didn't seem to go anywhere. The quotations from Lenat seem quite apt to me.