Usually zsh does fine wrt tab-completion, but sometimes you just get nothing when pressing tab - either due to a somewhat-broken completer, or it working as intended, with seemingly "nothing" to complete.
Because in the vast majority of cases, completion should use files - except for commands as the first thing on the line, and maybe some other stuff way more rarely, almost as an exception. But completing nothing at all seems like an obvious bug to me - if I wanted nothing, I wouldn't have pressed the damn tab key in the first place.
If using that becomes a habit every time one needs files, that'd be a good solution, but I still use the generic "tab" by default, and expect file-completion from it in most cases - so why not have it fall back to file-completion if whatever special thing zsh has otherwise fails, i.e. suggest files/paths instead of nothing.
Looking at _complete_debug output (it can be bound/used instead of tab-completion), it's easy to find where the _main_complete dispatcher picks completer scripts, and that there is apparently no way to define a fallback of any kind there - but it's easy enough to patch one in, at least.
Where _complete _ignored is the default completer chain, which will try whatever zsh has for the command first, and then, if those return nothing, instead of being satisfied with that, the patched-in continue will keep going and run the next completer - which is _files in this case.
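For reference, the completer chain itself is defined by the completer zstyle, so a sketch of that part of the setup in ~/.zshrc might look like this (assuming an otherwise-default compinit configuration):

```zsh
# Completer chain that _main_complete iterates over - _files at the end
# only gets a chance if earlier completers return failure, hence the
# patched-in "continue" for ones that succeed while adding no matches
zstyle ':completion:*' completer _complete _ignored _files
```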
The generous context in the patch is there to find the right place and bail if upstream code changes, but otherwise, whenever first running the shell as root, it fixes the issue until the next zsh package update (after which the patch will run and fix it again).
Doubt it'd make sense upstream in this form, as presumably the current behavior has been locked-in over the years, but an option for something like this would've been nice. I'm content with a hack for now though, it works too.
Some days ago, I randomly noticed that github stopped rendering long rst (as in reStructuredText) README files on its repository pages. This happened in a couple of repos, with no warning or anything - it just said "document takes too long to preview" below the list of files, with a link to view the raw .rst file.
Sadly that's not the only issue with rst rendering, as codeberg (and pretty sure its Gitea / Forgejo base apps) had issues with some of that syntax as well - didn't make any repo links correctly, didn't render the table of contents, missed indented references for links, etc.
So I thought to fix all that by converting these few long .rst READMEs to .md (markdown), which does indeed fix all of the issues above, as it's a much more popular format nowadays, and apparently well-tested and working fine on at least those git-forges.
One nice thing about rst however, is that it has one specification and a reference implementation of tools to parse/generate its syntax - python docutils - which can be used to go over an .rst file in a strict manner and point out all syntax errors in it (rst2html does this nicely).
A good example of such errors that always gets me is using links in the text with reference-style URLs for them below (instead of inlining them), to avoid making the plaintext really ugly, unreadable and hard to edit due to giant mostly-useless URLs in the middle of it.
You have to remember to put all those references in, ideally not leave any unused ones around, and then keep them matched to tags in the text precisely, down to every single letter - which of course doesn't really work when typing stuff out by hand, without some kind of machine-checks.
And then also, for git repo documentation specifically, all these links should point to files in the repo properly, and those get renamed, moved and removed often enough to be a constant problem as well.
Proper static-HTML doc-site generation tools like mkdocs (or its popular mkdocs-material fork) do some checking for issues like that (though confusingly not nearly enough), but require a bit of setup, with configuration and a whole venv for them, which doesn't seem very practical for a quick README.md syntax check in every random repo.
MD linters apparently go the other way and check various garbage metrics, like whether the plaintext conforms to some style, while also (confusingly!) often not checking basic crap like whether it actually works as markdown.
But having looked at md linters a few times now, I couldn't find any that do this nicely enough to use, so ended up writing my own - the markdown-checks tool - to detect all of the above problems with links in .md files, and some related quirks.
ReST also has a nice .. contents:: feature that automatically renders a Table of Contents from all document headers, quite like mkdocs does for its sidebars, but afaik basic markdown does not have that - and maintaining that thing manually, with all links working but without any kind of validation, is pretty much impossible, yet absolutely required for large enough documents with a non-autogenerated ToC.
So one interesting extra thing that I found necessary to implement there was for the script to automatically (with the -a/--add-anchors option) insert/update anchor-tags before every header, because otherwise internal links within the document are impossible to maintain either - github makes hashtag-links from headers according to its own inscrutable logic, gitlab/codeberg do their own thing, and there's no standard for any of that (which is a historical problem with .md in general - poor ad-hoc standards for various features, while .rst has internal links in its spec).
Thus making/maintaining a table-of-contents kinda requires stable internal links and validating that they're all still there - and ideally that all headers have such an internal link as well, i.e. that new stuff isn't missing from the ToC section at the top.
The script addresses both parts by adding/updating those anchor-tags, and having them in the .md file itself indeed makes all internal hashtag-links "stable" and renderer-independent - you point to a name= set within the file, not guess at whatever name github or another platform generates in its html at the moment (which inevitably won't match between them, so it's kinda useless that way too). And those are easily validated as well - since both the anchor and the link pointing to it are in the file, any mismatches are detected and reported.
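The anchor part can be approximated like so - slug rules and tag format below are assumptions, not the tool's actual logic (and a real version would also need to skip headers inside code fences):

```python
import re

def add_header_anchors(md_text):
    # Strip any existing anchor-lines, then emit a fresh <a name=...></a>
    # before every header, so internal (#slug) links stay renderer-independent
    lines = [ line for line in md_text.splitlines()
        if not re.fullmatch(r'<a name=[\w-]+></a>', line.strip()) ]
    out = []
    for line in lines:
        m = re.match(r'#+\s+(.+)', line)
        if m:
            # Assumed slug format: lowercase, punctuation dropped, spaces to dashes
            slug = re.sub(r'[\s_-]+', '-',
                re.sub(r'[^\w\s-]', '', m.group(1)).strip().lower())
            out.append(f'<a name={slug}></a>')
        out.append(line)
    return '\n'.join(out) + '\n'
```

Running it again on its own output changes nothing, which is what makes the anchors safe to keep committed in the file.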
I was also thinking about generating the table-of-contents section itself, same as it's done in rst, for which surely many tools exist already. But as long as it stays correct and is checked for not missing anything, there's not much reason to bother - editing it manually allows for much greater flexibility, and it's not long enough for that to be any significant amount of work, either to make initially or to add/remove a link there occasionally.
With all these checks for wobbly syntax bits in place, markdown READMEs seem to be as tidy, strict and manageable as rst ones. Both formats have rough feature parity for such simple purposes, but .md is definitely the only one with good-enough support on public code-forge sites, so a better option for public docs atm.
Earlier, as I was setting up filtering of ca-certificates on a host running a bunch of systemd-nspawn containers (similar to LXC), the simplest way to handle that configuration consistently across all of them seemed to be just rsyncing the filtered p11-kit bundle into each one and running the (distro-specific) update-ca-trust there, to easily have the same expected CA roots across them all.
But since these are mutable full-rootfs multi-app containers with an init (systemd) in them, they update their filesystems separately, and routine package updates will overwrite cert bundles in /usr/share/, so they'd have to be rsynced again after that happens.
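The sync step itself is just two commands per container - sketched here as a helper that only builds them, with all paths hypothetical, and systemd-run -M as one way to run a command inside a booted nspawn container:

```python
def ca_sync_cmds(machine, bundle='/etc/ca-certificates/filtered-trust-source'):
    # Hypothetical paths: rsync the filtered p11-kit bundle into the
    # container rootfs, then regenerate the trust stores inside it
    root = f'/var/lib/machines/{machine}'
    return [
        ['rsync', '-r', '--delete', bundle + '/',
            root + '/usr/share/ca-certificates/trust-source/'],
        ['systemd-run', '-qPM', machine, 'update-ca-trust'] ]
```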
Here fatrace is used to report all write, delete, create and rename-in/out events for files and directories (that's the weird "-f WD+" mask), which it promptly does. It's useful for seeing which apps might abuse SSD/NVMe writes, more generally for understanding what's going on with a filesystem under some load - which app is to blame for it and where it happens - or as a debugging/monitoring tool.
But it's also an awesome tool if you want to rsync/update files after they get changed under some dirs recursively. With the container updates above, it can monitor the /var/lib/machines fs, and will report when anything under /usr/share/ca-certificates/trust-source/ in there gets changed - which is when the aforementioned rsync hook should run again for that container/path.
It runs commands depending on regexp (PCRE) matches on whatever input gets piped into it, passing regexp match-groups through to those commands via env, with sane debouncing delays, deduplication, config reloads, tiny mem footprint and other proper-daemon stuff. It can also set up its pipe without a shell, for an easy ExecStart=run_cmd_pipe rcp.conf -- fatrace -cf WD+ systemd.service configuration.
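A unit file around that ExecStart= line might look like this (a sketch - the binary path and using WorkingDirectory= for fatrace's -c flag are assumptions here):

```ini
# run_cmd_pipe-fatrace.service - illustrative sketch, not a tested unit
[Unit]
Description=Run hook commands on filesystem events

[Service]
# fatrace -c limits reporting to the filesystem of the current directory
WorkingDirectory=/var/lib/machines
ExecStart=/usr/local/bin/run_cmd_pipe rcp.conf -- fatrace -cf WD+

[Install]
WantedBy=multi-user.target
```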
Having this running for a bit now, and bumping into other container-related tasks, I realized how useful it is for a lot of things even more generally, especially when multiple containers need to send some changes to the host.
For example, if a bunch of containers should have custom network interfaces bridged between them (in the root netns), which e.g. systemd.nspawn Zone= doesn't adequately handle - just add whatever custom VirtualEthernetExtra=vx-br-containerA:vx-br into the container, have the script that sets up those interfaces in there "touch" or create a file when it's done, and then run a host-script on that event to handle bridging on the other side.
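A host-side hook for that can be sketched as follows - the event-file naming convention and bridge name are made up here, with the event file assumed to be named after the host-side veth from VirtualEthernetExtra= above:

```python
import os

def veth_bridge_cmds(event_file, bridge='br-containers'):
    # Event-file assumed to be named after the host-side veth interface,
    # e.g. "vx-br-containerA", touched by the container script when it's up -
    # returns the "ip link" commands to plug that veth into a shared bridge
    iface = os.path.basename(event_file)
    return [
        ['ip', 'link', 'set', iface, 'master', bridge],
        ['ip', 'link', 'set', iface, 'up'] ]
```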
This can be streamlined for any types and paths of containers (incl. LXC and OCI app-containers like docker/podman) by bind-mounting a dedicated filesystem/volume into them, to pass such event-files around - kinda like it's done in systemd with its agent plug-ins, e.g. for handling password inputs, so not really a novel idea either. systemd.path units can also handle simpler non-recursive "this one file changed" events.
An alternative with such a shared filesystem can be to use any other IPC mechanism - like an append/tail file, fcntl locks, fifos or unix sockets - and tbf run_cmd_pipe.nim can handle all of those too, by running e.g. tail -F shared.log instead of fatrace, but the latter is way more convenient on the host side, and can act on incidental or out-of-control events (like a pkg-mangler doing its thing in the initial ca-certs use-case).