Using Gemini 3 to investigate improving pylint


Edward K. Ream

Nov 20, 2025, 5:24:37 AM
to leo-editor
Leo issue #4472 is a hobby project.

I have just asked Gemini 3 to critique my idea and design; this Gemini 3 thread is the result. It is way more detailed, thoughtful, and knowledgeable than my initial musings.

This sentence captures my initial vague idea exactly:

Your proposal essentially transforms Pylint from an Inference Engine (which is slow and guess-prone) into a Logic Validator (which is fast and deterministic).

I'm not sure what to do with all this insight, but I am profoundly impressed with Gemini 3. I might ask the pylint devs for their comments. But first, I must provide examples where pylint has failed to detect real errors in Leo's code base. Such errors have been fairly common in the past.

Edward

Thomas Passin

Nov 20, 2025, 11:26:37 AM
to leo-editor
That transcript certainly impresses me! Are you using a paid version of Gemini? So far I'm still using free chatbot versions.

Edward K. Ream

Nov 20, 2025, 11:53:39 AM
to leo-e...@googlegroups.com
On Thu, Nov 20, 2025 at 10:26 AM Thomas Passin <tbp1...@gmail.com> wrote:

That transcript certainly impresses me! Are you using a paid version of Gemini?

I'm using the paid version, and the slower "thinking" version, so responses can take 20 seconds or longer. BTW, here is the link to the entire conversation.

And here is the link to the entire conversation about half clones. I think a few more improvements can be made. Check #4478 for the latest developments.

Edward


Edward K. Ream

Nov 23, 2025, 2:44:08 PM
to leo-editor
On Thursday, November 20, 2025 at 4:24:37 AM UTC-6 Edward K. Ream wrote:

Leo issue #4472 is a hobby project... This sentence [from Gemini 3] captures my initial vague idea exactly:

Your proposal essentially transforms Pylint from an Inference Engine (which is slow and guess-prone) into a Logic Validator (which is fast and deterministic).

All well and good, but pylint's --prefer-stubs command-line option may be the best way to help pylint. Gemini 3 had no part in my thought process, and I have the feeling that Gemini 3 might have led me astray had I continued the original discussion. In a weird way, AIs seem to have blind spots.
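For readers who haven't used stubs: a .pyi file repeats a module's signatures with `...` bodies, and --prefer-stubs tells pylint (via astroid) to trust the stub rather than infer types from the implementation. A minimal sketch of what such a stub looks like, loosely modeled on Leo's leoNodes module; the exact signatures here are illustrative, not copied from Leo:

```python
# leoNodes.pyi (illustrative) -- signatures only, every body is `...`,
# so a checker reading this file never has to guess at types.
from __future__ import annotations  # allow the forward reference to Position

class VNode:
    gnx: str
    def bodyString(self) -> str: ...

class Position:
    v: VNode
    def moveToThreadNext(self) -> Position: ...
```

When a stub like this sits next to the real module, pylint's inference problem reduces to checking call sites against these declared signatures.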

My next step will be to adapt Leo's python importer so that it copies signatures for classes, methods and functions into a .pyi (stub) file. This will be a much easier project than my old make-stub-files script. It should take only a few days.
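The signature-copying idea can be sketched with the standard `ast` module. This is a simplified illustration of the approach, not Leo's importer, and it ignores decorators, async defs, assignments, and default-value edge cases:

```python
import ast

def stub_lines(source: str) -> list[str]:
    """Emit .pyi-style lines for top-level classes, methods, and functions:
    copy each signature and replace each body with `...`."""
    lines: list[str] = []

    def emit(node: ast.AST, indent: str) -> None:
        if isinstance(node, ast.ClassDef):
            lines.append(f"{indent}class {node.name}:")
            members = [n for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.ClassDef))]
            if not members:
                lines.append(f"{indent}    ...")
            for n in members:
                emit(n, indent + "    ")
        elif isinstance(node, ast.FunctionDef):
            args = ast.unparse(node.args)  # e.g. "self, count: int=1"
            ret = f" -> {ast.unparse(node.returns)}" if node.returns else ""
            lines.append(f"{indent}def {node.name}({args}){ret}: ...")

    for node in ast.parse(source).body:
        emit(node, "")
    return lines

src = '''
class Position:
    def moveToThreadNext(self, count: int = 1) -> bool:
        return True

def top_level(x: int) -> str:
    return str(x)
'''
print("\n".join(stub_lines(src)))
```

The class and method names above are made up for the demo; a real tool would also need to handle properties, class-level assignments, and overloads.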

Edward

Thomas Passin

Nov 23, 2025, 10:47:34 PM
to leo-editor
On Sunday, November 23, 2025 at 2:44:08 PM UTC-5 Edward K. Ream wrote:
All well and good, but pylint's --prefer-stubs command-line option may be the best way to help pylint. Gemini 3 had no part in my thought process, and I have the feeling that Gemini 3 might have led me astray had I continued the original discussion. In a weird way, AIs seem to have blind spots.

I've had similar experiences with both ChatGPT and Claude, and I too have then had new thoughts that seem productive. It's not really that they have blind spots; it's that a response is constructed from what was in the training material, in the context of the conversation. This might be exactly what one wants, such as how to fix a known configuration problem. When one wants something different, something not well represented in the training set, it's not likely to surface unprompted.

Sometimes when I get a new take on the subject and put it to the chatbot, it immediately sees the point and explains to me why the idea has merit, which I don't need, because I already know why it might be a good idea.

I think these kinds of conversations can help me get new ideas, so they aren't complete losses.

Edward K. Ream

Nov 24, 2025, 5:30:33 AM
to leo-editor
On Sunday, November 23, 2025 at 1:44:08 PM UTC-6 Edward K. Ream wrote:

My next step will be to adapt Leo's python importer so that it copies signatures for classes, methods and functions into a .pyi (stub) file.

That would be a big mistake: discovering function signatures is but a small part of the problem.

Instead, the next step will be to play with mypy's stubgen script. I've just spent 15 minutes studying it. It's based on a full analysis of the AST (parse tree), as it must be.

The following appears at the end of the module-level docstring: "Note: The generated stubs should be verified manually." A Leo-specific tool must rely on the generated stubs without any manual changes; if it can't, the whole project will fail, even after adding various Leo-specific hacks. We shall see.
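One way to drop the manual-verification step is to validate generated stubs mechanically. A minimal sketch of such a check, assuming we only require that the stub parses and that it names every top-level class and function from the source (real acceptance criteria would be stricter):

```python
import ast

def stub_covers_source(source: str, stub: str) -> bool:
    """Check, with no human review, that a generated stub parses and
    names every top-level class and function in the source module."""
    def top_names(tree: ast.Module) -> set[str]:
        return {n.name for n in tree.body
                if isinstance(n, (ast.ClassDef, ast.FunctionDef,
                                  ast.AsyncFunctionDef))}
    try:
        stub_tree = ast.parse(stub)  # a stub must be valid Python syntax
    except SyntaxError:
        return False
    return top_names(ast.parse(source)) <= top_names(stub_tree)

source = "class C:\n    pass\n\ndef f(x):\n    return x\n"
good_stub = "class C: ...\ndef f(x: int) -> int: ...\n"
bad_stub = "class C: ...\n"  # missing f

print(stub_covers_source(source, good_stub))  # True
print(stub_covers_source(source, bad_stub))   # False
```

A check like this could gate stubgen's output automatically, which is what a no-manual-edits pipeline needs.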

Edward