All,
While building out these now 60+ Simple Eiffel libraries, I watched Claude run into the same problem human programmers hit all the time. The problem is with naming features and the semantic framing that comes with language, most specifically name-context bias.
In my theology work this is huge, and in software (which is also highly language-driven) it is equally huge, for nearly the same reasons on the human side of the equation.
I watched as Claude struggled. It did what programmers do: "I am calling this Supplier for feature X and I am not finding what I need!" Why? Beyond the possibility that feature X simply does not exist on the Supplier, there is the possibility that the feature is there but is not called what the programmer, in their own cognitive semantic frame, thinks it should be called. The programmer is looking for their own semantically framed "feature Y"; the Supplier has exactly what they need, but the semantic context of the Supplier's author said, "Let's call it feature X." And so it was.
Therefore, the programmer of the Client caller is thinking Y, with a semantic bias, while what he needs is feature X, which does exactly what he wants but is not named Y. So, he misses it. Perhaps that missed reuse opportunity drives him to write his own Y feature directly in his Client code.
Eiffel has a great language construct that allows a single feature body (a do routine, a once routine, or an attribute) to be given a list of feature names, that is, synonyms. Because I know this, I had a lengthy discussion with Claude about the matter in an Eiffel context, and in a Simple Eiffel library context specifically. The end result is a design report and document that you can find
here. This report details what we talked about, what we concluded, and the guidance we used to refactor the ENTIRE Simple Eiffel library universe (60+ libraries).
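To make the construct concrete, here is a minimal sketch of synonym declarations. The class and feature names are mine, invented for illustration; they are not taken from any Simple Eiffel library or from the report:

```eiffel
class
	INVENTORY

create
	make

feature {NONE} -- Initialization

	make
			-- Start with an empty inventory.
		do
			create ids.make (0)
		end

feature -- Access

	count, size, total: INTEGER
			-- Number of recorded items.
			-- One declaration, three names: a client thinking
			-- "count", "size", or "total" finds the feature either way.
		do
			Result := ids.count
		end

	has, contains, includes (a_id: INTEGER): BOOLEAN
			-- Is an item with id `a_id` recorded?
		do
			Result := ids.has (a_id)
		end

feature -- Element change

	add, put, extend (a_id: INTEGER)
			-- Record item `a_id`.
		do
			ids.extend (a_id)
		end

feature {NONE} -- Implementation

	ids: ARRAYED_LIST [INTEGER]
			-- Identifiers of recorded items.

end
```

One caveat worth knowing: per the language definition, each synonym name denotes a separate feature that gets its own copy of the shared declaration (the "unfolded form"), so the technique is best suited to queries and commands like these, where duplicated bodies behave identically.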
As I write this, Claude (with my help, watchful eye, and so on) is about one-third of the way through the task. There is a TON of code and this matter is non-trivial, even for a blazing-fast AI helper. So, we're motoring through. So far, so good: everything compiles, all the tests pass, nothing is broken, and opening up the code reveals that Claude is doing just what we talked about and planned for so extensively (see the document linked above).
I will report back and let you know how it goes. This is one of those incredibly boring refactoring stints. Even at hyper warp-drive AI speeds, the process is rather mind-numbing. However, so far, the results look really promising. You will get hints of that in the report above.
Best,
Larry