No comment will be construed as no objection.
.... But I don't know how big a problem the codecov issue is ...
Still, "untested is broken", right?
There are ways to "bloat" the set of doctests with minimal impact. For example, we could create a file, say "TESTS.py", in a Sage module, consisting only of doctests. It would not be included in the reference manual, would not be visible when someone does "X.my_favorite_method?" or "X.my_favorite_method??", and, since it is a separate file, many developers would never interact with it at all. There may already be some files like that in the Sage library.
I don't know if this approach is worth it, but it does provide a way to add more doctests with minimal impact on most users and developers.
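To make the idea concrete, here is a minimal sketch of what such a doctest-only file could look like. The file name "TESTS.py" and the `normalize` helper are purely illustrative (not from the Sage library); the point is that the file contains nothing but docstrings full of doctests, so it never shows up in the reference manual or in interactive help for the real module.

```python
# Hypothetical doctest-only companion file ("TESTS.py" is an
# illustrative name; the helper below stands in for real library code).
"""
Extra doctests for edge cases, kept out of the reference manual.

TESTS:

Check behavior on empty input::

    >>> normalize([])
    []

Check that duplicates are collapsed and output is sorted::

    >>> normalize([2, 1, 2])
    [1, 2]
"""


def normalize(values):
    """Return sorted unique values (illustrative helper only)."""
    return sorted(set(values))


if __name__ == "__main__":
    # Running the file directly executes every doctest it contains.
    import doctest
    doctest.testmod()
```

Because the doctests live in a separate file, they still count toward coverage when the test suite runs, but a developer reading or using the main module never sees them.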
It seems we have to choose between:

(1) We keep the status quo: not testing every code path created in the PR results in the PR check failing.

(2) We keep codecov/patch as it is, but require adding doctests for every code path created in the PR.

(3) We keep our current practice (add doctests for major functionality; doctests for broken cases are added later). We change codecov/patch to report but not fail.

I propose to take (3).
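For option (3), Codecov's documentation describes an "informational" mode for status checks, which makes the check always report its result without ever failing the PR. A sketch of the relevant fragment of the repository's codecov.yml (assuming the default patch status is what currently fails PRs):

```yaml
# Sketch: make the codecov/patch status informational only.
# The check still posts its coverage result on every PR,
# but it can no longer cause the PR check to fail.
coverage:
  status:
    patch:
      default:
        informational: true
```

This keeps the coverage signal visible to reviewers while removing the hard gate.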