I've probably not emphasized the other side of my opinion enough, the pragmatic/practical one, so let me develop it further.
On bleeding-edge vs. "withered" technology (to cite Gunpei Yokoi), my approach is as follows. Obsolete technology, that is, technology that has been superseded by safer alternatives, should be deprecated within a practical timeframe, then preserved as a possibly-still-living historical artifact. Old technology, that is, technology that has been superseded by merely more efficient alternatives, should be maintained where practical and preserved where not. Modern technology should be embraced as long as it is safe and ready for use.
With this, I deem C to be obsolete technology because much, MUCH safer alternatives exist. It's not that C doesn't work in an absolute sense, but it is legendarily hard to write correct code in it, especially regarding integer overflow and memory safety. This translates to countless bugs, notably security bugs in critical software components. Take Zig for example, a programming language specifically designed as a competitor to and replacement for C (a short sketch illustrating several of these points follows the list). It:
- Mandates compile-time errors for undefined behavior detectable at compile time, and traps (in safety-checked builds) on run-time undefined behavior such as integer overflow, instead of giving compilers carte blanche to do whatever the hell they want at the first opportunity.
- Provides built-in overflow-checking arithmetic functions and dedicated overflow arithmetic operations, instead of assuming the programmer knows how to write overflow checks correctly every time they are needed.
- Uses arbitrary-precision compile-time integer literals (comptime_int), instead of assuming every integer constant is an int unless suffixed otherwise and silently truncating it when it doesn't fit.
- Requires explicit casting on every unsafe type coercion, instead of silent conversions and a one-size-fits-all cast as the escape hatch.
- Has optional types, instead of trusting the programmer to never, ever forget a NULL pointer check.
- Has real arrays and slices with enforced lengths (plus sentinel-delimited arrays mostly to safely interface with C strings), instead of raw C pointers with no additional semantics.
- Has reasonably powerful metaprogramming capabilities that enable, for example, compile-time type-checked string formatting, instead of free-for-all varargs with compiler-dependent warnings for printf-style functions.
- Has extensive built-in testing capabilities leveraging said metaprogramming, instead of leaving testing as an afterthought.
- Promotes passing allocators explicitly to functions that allocate memory dynamically, instead of leaving programmers to rely implicitly on malloc/free or something else.
- While it provides neither a borrow checker nor RAII, by deliberate choice, it does provide defer statements, instead of hoping programmers covered every control-flow path when freeing resources.
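To make this concrete, here's a minimal sketch gathering several of those points in one place: checked arithmetic, optionals, explicit allocators, defer and built-in tests. It targets roughly Zig 0.11-era syntax (the standard library still moves around between releases), and the helper names checkedSum and findByte are mine for illustration, not standard APIs:

```zig
const std = @import("std");

// Checked arithmetic: std.math.add returns error.Overflow instead of
// silently wrapping or invoking undefined behavior.
fn checkedSum(a: u8, b: u8) !u8 {
    return std.math.add(u8, a, b);
}

// Optional return type: the caller is forced to handle the "not found"
// case; there is no NULL pointer to forget to check.
fn findByte(haystack: []const u8, needle: u8) ?usize {
    for (haystack, 0..) |byte, i| {
        if (byte == needle) return i;
    }
    return null;
}

pub fn main() !void {
    // Allocators are passed around explicitly instead of implied by
    // a global malloc/free pair.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const buf = try allocator.alloc(u8, 32);
    // defer runs on every exit path, so the free is never forgotten.
    defer allocator.free(buf);

    // Format strings are checked at compile time: a bogus specifier or
    // a missing argument is a compilation error, not a runtime surprise.
    std.debug.print("sum = {d}\n", .{try checkedSum(40, 2)});

    if (findByte("hello", 'l')) |index| {
        std.debug.print("found at {d}\n", .{index});
    }

    // Integer literals are arbitrary-precision comptime_int values;
    // the line below would not compile because 300 doesn't fit in a u8:
    // const oops: u8 = 300;
}

// Tests are a language construct, executed with `zig test`.
test "checkedSum detects overflow" {
    try std.testing.expectError(error.Overflow, checkedSum(255, 1));
}
```

Running `zig test` on that file executes the test block with no third-party harness, and a Debug or ReleaseSafe build will trap on any overflow the explicit checks above don't already catch.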
There's only so much that can be papered over with code reviews, static analysis, fuzzing, undefined-behavior sanitizers and third-party libraries. C is kinda like asbestos in a sense: it works and it's safe as long as the wall stays sealed, but it's everywhere and we should really stop adding more of it. Except that hackers around the world are hammering every publicly-exposed, juicy-looking wall looking for holes, and most walls are made of damp drywall instead of reinforced concrete. C is simply no longer fit for purpose in security-sensitive contexts (looking at you, OpenSSL), let alone inside trusted computing bases.
On asynchronous programming, not every single program needs asynchronous operations or asynchronous I/O. However, most software stacks remotely concerned about throughput should probably be engineered to be asynchronous for scalability reasons. It's easier to turn an asynchronous operation into a synchronous one (initiate it and then wait for it) than the opposite (run it on a separate thread and somehow synchronize with it; see the sketch below), or to have multiple ongoing asynchronous operations than synchronous ones. This assumes that programming languages and libraries provide good, easy-to-use asynchronous support, something C and Unix/POSIX APIs have historically done an exceptionally bad job at.
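To illustrate that asymmetry, here's a hedged sketch using plain std.Thread (no language-level async; blockingRead and PendingRead are hypothetical names, again roughly Zig 0.11-era syntax). Making a blocking call run in the background takes a worker thread plus explicit synchronization, whereas turning the in-flight operation back into a synchronous one is a single wait:

```zig
const std = @import("std");

// Hypothetical blocking operation we want to expose asynchronously.
fn blockingRead(buf: []u8) usize {
    // Pretend this blocks on I/O; here we just fill the buffer.
    @memset(buf, 'x');
    return buf.len;
}

// Sync-to-async, the clumsy direction: spawn a thread and carry the
// result across it by hand.
const PendingRead = struct {
    thread: std.Thread,
    result: usize = 0,

    fn worker(self: *PendingRead, buf: []u8) void {
        self.result = blockingRead(buf);
    }

    fn start(self: *PendingRead, buf: []u8) !void {
        self.thread = try std.Thread.spawn(.{}, worker, .{ self, buf });
    }

    // Async-to-sync, the easy direction: an in-flight operation becomes
    // synchronous by simply waiting for it.
    fn wait(self: *PendingRead) usize {
        self.thread.join();
        return self.result;
    }
};

pub fn main() !void {
    var buf: [16]u8 = undefined;
    var op = PendingRead{ .thread = undefined };
    try op.start(&buf);
    // ... do other useful work while the "I/O" is in flight ...
    const n = op.wait();
    std.debug.print("read {d} bytes\n", .{n});
}
```

In a real stack an event loop or completion queue would shoulder that bookkeeping for you; the point is just how lopsided the two directions are.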
On compatibility layers, source-level and binary-level approaches are similar in that they enable software made for one platform to run on another; where they differ is whether new binaries need to be compiled or existing binary artifacts can be reused as-is. What I observe is that MINIX3's approach to the NetBSD userland ultimately requires a lot of maintenance to keep it up-to-date with upstream. Therefore, a new approach is needed to solve the maintainability issue for the userland, of which NetBSD binary syscall emulation is but one option among others. Ideally, as a micro-kernel operating system we shouldn't depend on any specific userland; the way NetBSD has been deeply integrated into MINIX3 prevents the use of any alternative without a massive engineering effort.
On capability-based systems, I've already explained that Unix's pervasive reliance on ambient authority and global namespaces is problematic, but it doesn't stop there. Traditional user-based access permissions, for example, are fine for isolating untrusted users from each other, but offer little help in isolating users from untrusted applications. Downloading random applications off the Internet is a reality that predates the smartphone era by decades; saying that the operating system itself won't get compromised because the user doesn't have administrative rights isn't helpful when the user account holds a person's entire digital life one program invocation away from disaster. The trusted computing base of a cranky old Unix sysadmin is not the same as the trusted computing base of an average end-user.
A capability-based system can also help with the componentization of software. Fuchsia's overall application model is an example of this, and its driver runtime model in particular allows drivers either to be colocated within a process for performance or isolated across process boundaries, transparently, for security. This basically allows for policy-driven rather than architecturally-mandated privilege separation of device drivers, but the idea could be extended to all applications and services on a system; just like threads in user space, processes don't have to be exclusively a kernel construct. Ideally, we wouldn't need to choose one architecture ahead of time between single-address-space unikernels, monolithic kernels and micro-kernels, but could instead mix-and-match as policy dictates and users desire.
So, what to do then? Well, I still haven't actually written an actionable proposal, and I've ranted enough again for one evening. Hopefully my personal stance on operating system architecture has been clarified a bit, but clearly its practical incarnation would not be a direct continuation of MINIX3's current source tree. Heck, I'm not even sure the end result could really be called MINIX4: while I don't think it would be incompatible with MINIX3's currently stated goals, it is at the very least a huge change in execution. It also raises the question of how much existing MINIX3 source code, if any, would be reused (the answer doesn't have to be none).
Finally, on the topic of manpower: I totally agree. I've just started working in my spare time on decompiling an old PlayStation video game, mostly because I can and it's fun to me. I'm not currently motivated to work on MINIX3 as it stands because I'm not interested in it, for reasons I've developed in an absurd amount of detail. Would I be interested in working on something that is not quite MINIX3? Maybe, maybe not; "not quite MINIX3" is quite vague after all...