Hi Florian,
On the examples of "stale" codacy-bot issues not being deleted/resolved automatically - please have a look at the PR I mentioned below. The ones I resolved manually are fixed in the code but were left open by the bot. I've been wondering why it would close some but not others, and one theory is that my force-pushing the DCO sign-off interfered: if the bot could no longer find the SHA it had been reviewing, that may have prevented it from detecting that the issues were fixed.
For the examples of "snowballing" - on closer look, it seems I misinterpreted it a little - apologies. It reported the exact same "issue" multiple times, once per occurrence, including within the same file, and the new comments appearing on the PR timeline interleaved with my commits fixing them, which led me to believe it was posting the same "issue" more than once. Or, again, pushing the same commit with a different SHA for the DCO sign-off may also have played a role. So this point is somewhat inconclusive.
To be clear, I still think that while this may work fine for small commits and bugfixes with fewer than 3 Codacy "issues", in general posting a comment for every "issue" occurrence the bot identifies just clogs the PR discussion. It won't help actual reviewers either, because they will struggle even to locate their own comments among the bot messages.
The link to the full report that it adds in the checks area is IMO absolutely enough. Reviewers can simply ask authors to address all the Codacy issues they agree with, and then discuss those that are left. To avoid asking for this manually every time, the PR "template" could be amended to include this request to authors. In this context it would be good if a specific occurrence of an issue could be commented on, or "moved to comments" with a click - then the author can explain why they don't want to fix it, or reviewers can bring it to the author's attention. But pushing all of them as comments by default is not such a good idea IMO.
On examples of the usefulness of the issues - one example is its complaints about package-local visibility (which in my case was entirely intentional, to allow testing but not "external" usage). It has basically forced me to make those items public, which I don't really mind, but I equally don't understand what the issue was in the first place, or how making them public helps readability or anything else.
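For context, the pattern the bot objected to looks roughly like this (a minimal sketch with hypothetical names, not the actual code from the PR):

```java
// Validator.java - the class and its method are package-private
// (no visibility modifier), so unit tests placed in the same
// package can call them directly, while code outside the package
// cannot - i.e. they are testable but not part of the public API.
class Validator {
    // package-private on purpose: reachable from same-package tests only
    static boolean isValid(String input) {
        return input != null && !input.isEmpty();
    }
}

public class Demo {
    public static void main(String[] args) {
        // A test class living in the same package can exercise the
        // logic directly, without the member being public.
        System.out.println(Validator.isValid("abc")); // true
        System.out.println(Validator.isValid(""));    // false
    }
}
```

Making `isValid` public, as the bot effectively demands, widens the API surface without any gain that I can see.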
But to be honest I am not sure we want to start a discussion of these things here, because it has the potential to become very time-consuming without a real outcome. Over time the reviewers may reach their own consensus on which "issues" are useful and which are not, and hopefully it will be possible to tune the bot accordingly.
For my PR specifically, I fixed the "issues" that were trivial, even where I didn't see any point in them, and left the rest for now. My hope is that when the PR is actually reviewed by a human, the reviewers will let me know which of the "issues" they want fixed in order to approve.
Many thanks,
Dmitry