It was thus said that the Great 'Martin Eden' via lua-l once stated:
>
> On 2025-11-29 10:17, 'Sainan' via lua-l wrote:
> >I'm so fucking tired of this. AI still either just copies text that was
> >already there (maybe rephrasing it a little) or it just straight-up
> >hallucinates. It had the same problems when it was first introduced over 2
> >years ago. Why is no one learning this?
>
> Well maybe.
>
> I expect some people here have sufficient expertise in Lua internals
> (looking at you Roberto!) to compare computer-generated text with
> their vision. And make their opinions.
>
> Also that's why I used two repos in test that are completely mine
> and mostly unknown.
>
> Scope of documentation is biased based on available code files and
> related text. So yeah, not like we humans really like to perceive it.
>
> Also we can test what will happen when documentation contradicts
> code. But I'm too lazy for this.
^^^^^^^^^^^^^^^^^^^^^^^^^
This does not surprise me. My second experience with LLMs involved
someone too lazy to do a good job with LLMs, in an attempt to disprove my
stance that LLMs are not worth the effort. [1] They didn't put in the
effort and totally misunderstood the problem. Funny, that.
But I, too, decided to try this site for myself, and I submitted two
repos of my own [2]. I'm in the process of writing up my experience with
it, but overall, I'm still not convinced. The first repo is a code base
I've been working on for 25 years; the second one for just a few years.
Overall, not a great showing. Plenty of errors in the "documentation",
some quite subtle unless you know the code well. Some are just so
egregiously bad that they border on maliciousness. I also noticed that
while both repos are about the same size in lines of code (7,400 in one,
9,500 in the other; both the same order of magnitude), it was a bit less
wrong on the smaller repo. Two reasons: the smaller is the one I've been
using for 25 years, and I've been removing features I don't use for the
past year, so it's easier to follow; the larger has a bit more logical
complexity due to its nature (and I've been working on it for far fewer
years). I suspect that the larger one is just closer to the LLM's token
limit for working effectively. I'd try it on a larger repo, 155,000
lines of code written in the early 90s, but unfortunately I'm not versed
enough with that code base to find errors in the generated
"documentation".
I'll leave it to you, the reader, to find the egregiously bad errors in
the repo "documentation" [2]. My hypothesis: no one will (even with the
hint that it's in the second repo), which to me says this is not a tool
worth using.
-spc (and I suspect this thread will be closed as "off topic" rather
quickly now ... )
[1]
https://boston.conman.org/2025/06/05.1
[2]
https://deepwiki.com/spc476/mod_blog
https://deepwiki.com/spc476/a09