Another year passes...


James T. Sprinkle

May 9, 2022, 8:28:55 PM
to minix3
It has been 5 years since the last release candidate came out and almost 8 years since the last stable release.  Is anyone actively working on Minix now?

I have been wondering....

If someone wanted to give it a redo, where would you start? Or, do you scrap the code base and re-imagine the concepts into something like Minix, similar to how Minix is something like Unix? I'm just talking about the kernel to start with... Would you go back to v1 or v2 when it was educational and start from there? v3 at book release? v3 at some point before it started to go towards becoming another BSD variant or after?

Or should we even bother?  So many other hobby OS projects we can spend our time on.  So many other *nix clones.  Seems everybody wants to make their own *nix these days with varying levels of success.

As much as I'd hate to walk away from an old friend after over 20 years, maybe it is time to do so.

Change my mind?

The Grue

Giovanni Falzoni

May 10, 2022, 3:58:52 AM
Minix is dead.

Giovanni Falzoni
Via A. Gramsci, 18/A
20051 Cassina de' Pecchi - Italy

May 10, 2022, 8:03:27 AM
to minix3
Hi James,

To answer your first question: unfortunately, no one is actively working on Minix.  Personally, I've been slowly working towards getting the project back up but real-life priorities have impeded that progress, especially in this last year.  Until someone has enough time, know-how and energy to work on the project, it will not be revived. 

This doesn't mean it cannot be revived (to Giovanni's point), but while there's plenty of interest in the project, actual development help is harder to come by.  I haven't given up on my goal to work on the project but I'm still not sure how soon I'll be able to finally pick it up well enough to contribute.

Now I'll try to address the other questions:
  • Should we even bother? - I think so.  The name Minix itself has brand recognition, even if it's fading.  Sadly, microkernels are still unpopular despite their security benefits.  Projects such as HelenOS, Redox and the Mach kernel keep the torch burning, but I'm not sure what their level of maturity is with respect to mainstream OSes.  They're probably not at the level to compete with a behemoth such as Linux.
  • That leads me to my next point (and question): if you're redoing the OS, where would you start? - That depends on your goal.  If you want to build an OS from scratch and/or learn how to write an OS, then you should either start from the beginning (that is, from near-zero) or pick any version of Minix you please.  If you want to compete with modern OSes, then the latest build of Minix is your best bet: it is probably still the closest version to being a mainstream OS.  (See this post for more technical details.)  There is a *lot* of work to be done, but I still believe that this build requires the least amount of work to get modern 3rd-party software to build.
  • Unfortunately, writing OSes is very, very complicated, so starting a project from scratch gives you the benefit of in-depth knowledge of OS development, but there is a steeper hill to climb.  Projects such as SerenityOS and Sortix are doing just that.  In fact, SerenityOS has reached a decent level of maturity thanks to its solid user base.  Too bad it's not microkernel-based.
So, to summarize: Minix may be "mostly dead" but its code base is still out there ready to be used or forked, depending on what your OS development goals are.  (I'd recommend that any forks use a different name to avoid confusion.) 

Hope this helps,


Jean-Baptiste Boric

May 10, 2022, 5:18:40 PM
to minix3
I have my own rather unorthodox opinions on the topic of OS development. I believe MINIX3 in its current state is a dead-end on multiple fronts.

On the technology side, MINIX3 is a Unix-like, micro-kernel operating system written in C for the i386 and ARMv7-A architectures. Every single part of that sentence is outdated or downright obsolete:
  • Unix is over 50 years old and was designed to run on a PDP-11. The world has moved on from the PDP-11, but Unix hasn't really moved on from the VAX. Computers nowadays have non-uniform memory access, asymmetric multiprocessing, and massively parallel, asynchronous input/output devices. Ambient authority was papered over with namespaces on Linux; forking fits poorly with threading and virtual memory and has been kept on life support through copious amounts of devilishly tricky CoW and over-provisioning shenanigans; the first rule of signals is you don't talk about signals; and the POSIX standard still thinks synchronous I/O, poll() and pipelines of raw text are cutting-edge technologies when asynchronous procedure calls, kqueue and PowerShell have been around for decades. It has sedimented so much it's probably turning into oil as we speak.
  • While micro-kernels are not an outdated concept, the design and implementation of MINIX3's own micro-kernel definitely are. Hollowing out an 80's Unix kernel to the bones certainly reduces the number of lines of code in the kernel, but the end result is still a bunch of old-timey bones.
  • As with Unix, C is over 50 years old. Unlike Unix, C as a programming language hasn't really evolved all that much. C99 and C2x are made of bits and pieces of absolutely uncontroversial and for the most part anecdotal C++ features, while not actually helping programmers do their job. C++ (at least the modern parts), Rust and Zig, to name a few well-known alternatives, aren't just better than C: they offer substantial safety, security and productivity benefits over a programming language designed to be a high-level, portable counterpart to assembly code.
  • i386 (as well as IBM PC-style computers) has been dead for well over a decade, and ARMv7-A is only found in low-end System-on-Chips nowadays. Modern x86-64 and AArch64 processors treat them at best as compatibility modes.
On the teaching side, xv6 seems far more popular than MINIX in this market. It's certainly far easier to wrap one's head around it than MINIX3 and the xv6 handbook is a work of art. Too bad we're still teaching new generations of students that fork is a cool thing...

Another issue is the NetBSD userland. While it has solved the issue of application portability, keeping it up to date has, from what I've seen, been a real problem. Ultimately, I think melding MINIX with a source-level port of NetBSD's userland was in hindsight a mistake, but one that would have been extremely hard to predict back in 2010. While syscall emulation layers already existed, the most well-known examples were mostly about running Linux binaries on BSD, not about mixing and matching vastly different operating systems the way the later WSL1 for Windows and Starnix for Fuchsia do. If MINIX 3.2.0 had instead been engineered to essentially run native NetBSD binaries as-is for a userland through syscall emulation, I would expect that keeping up with upstream NetBSD wouldn't have been such a headache. But I certainly don't blame the MINIX3 developers back then for either not thinking about that option or thinking it was utterly bonkers.

So, what to do then? Well, I do have opinions and ideas about that too, but it's getting late right now and I've ranted enough for one evening.

Best regards,

joseph turco

May 11, 2022, 9:34:18 PM
If Minix gets USB support or a Raspberry Pi port, it would be a huge boost in garnering usage and popularity. I am interested in Minix because I want to learn the OS from the ground up alongside learning C. Sadly, the only way I can use it is via a VM. Is the book any good?


hao jiu

May 14, 2022, 12:21:42 PM
to minix3
Forgive my ignorant words.
Minix's latest version is 3.4.0, if I'm right, and it can run on the BeagleBone Black; you can try it if you have that board on hand and would like to. I'm also interested in Minix 3 now, especially on ARM. I think the i386 version is outdated, but ARM boards can easily be found. Maybe it's a good thing for beginners that Minix 3 is not being developed forward: a beginner can learn Minix 3's code before it gets larger or much different. I recently noticed that xv6 has turned to RISC-V as its new platform; unfortunately, I don't think that's as suitable for beginners as ARM. Also, the latest book about Minix 3 covers the i386 version, not ARM. Minix 3.4.0 enables the ARM MMU, which makes it harder to learn, but that's OK for me.

Best regards,
jiuhao

Pablo Pessolani

May 14, 2022, 1:23:00 PM
Hi, several months ago I suggested refactoring Minix to be integrated with Linux.
Several years ago I wrote Minix over Linux as a userspace operating system, in a similar way to User Mode Linux.
I also migrated and modified the MINIX3 IPC to the Linux kernel, and I wrote a Minix-based unikernel running over Linux.

I think it is impossible to track every new underlying technology without a huge community of programmers with the knowledge, experience and interest required. Therefore, I suggest adapting Minix to Linux, with some components as kernel modules and others as userspace components.


May 15, 2022, 7:57:17 AM
to minix3
Hi jiuhao,

I'd like to respond quickly to your comment by saying that I agree that simpler OSes can better serve as learning platforms for OS development.  This would explain the continuing interest in previous versions of Minix 3.  In the long term, I think a more modular approach to OS development (one where components are more isolated and independently developed) might help provide a platform that is both simple and fully featured.  This, however, is a very ambitious plan.  I'm not sure what you mean regarding xv6, but I know of a few developers who have used xv6 as a learning platform for C and OS development.


May 15, 2022, 8:08:43 AM
to minix3

Sorry to sidetrack the discussion a little bit more, but I wanted to ask you about the research you listed above.  Unfortunately, I was not aware of this published research until I read your post today.  I searched for your previous post and it did not link to these papers, so I never realized that the proposal was based on existing work.  To that end, I was wondering about the following:
  • Are your papers available in a public paper repository such as arXiv? So far I've only found your 3rd paper here:
  • Is your code publicly available for download and/or cloning via GitHub, GitLab or similar?
  • If your code is publicly available, under what license is it published?  Please note that while Minix is BSD-licensed, Linux is GPL, and GPL code can easily "bleed into" BSD-licensed code.



May 15, 2022, 10:51:57 AM
to minix3
Hi Jean-Baptiste et al.,

First of all, thank you for contributing your perspective on the matter.  This one's the hardest response to write, not just because your position is, as you say, "unorthodox," but because several of the points you made make sense and are for the most part true to one extent or another.  However, one overarching theme I take from your response is that everything is "out of date" and outdated is bad.  I'd like to try to present a somewhat weak counter-argument to that, in bullet-point format no less:
  • Personally, I'm averse to "fad-oriented" programming. To me, modern "bleeding edge" development has made a habit of abandoning ages of tried-and-true programming wisdom and convention.  Going into detail would take too long.  In short, on the one hand you can end up with something novel and pretty cool, but more often than not developers end up re-inventing the wheel, and poorly at that!
    • The most obvious example lies in the realm of user interfaces, which has become a rather polarizing topic at times.  I think it's one reason a project such as SerenityOS has gathered such a following.  Not too long ago I also learned that the IBM Common User Access document was a thing.  There is no such document for modern UI development, and I think there needs to be.
  • If designed well, a piece of engineering can be indispensable and "timeless".  Most NASA spacecraft are still chugging along with old chips and code written decades ago.  Mainframes and COBOL are still a thing.  We may criticize and make fun of them, but they're still out there doing important work!
    • These days Unix and POSIX are considered timeless, and while they come with cruft, they also keep existing software more or less compatible.  That is changing, though: less 'old software' compiles under newer compilers, for a variety of reasons.  We're imposing extra work just to rewrite code in order to simply keep it operating!
    • I'd rather see more software development hours spent on writing new things that work on top of old things, rather than updating old things to keep them working on top of new things (themselves built on top of old things).  Breaking and deprecating APIs seems like a modern sport.  I can see where it's good when it's done for security purposes, but sometimes it seems changes are made for "economic" reasons...
    • Overall, I think more consideration needs to be given to the impact that deprecating old code and libraries has on the existing code base.
  • Asynchronous programming is another kind of programming I'm not fond of. People are poor at reasoning about async code, and it often leads to weird and hard-to-diagnose issues.
    • I'd love to see a development system that lets the programmer write synchronous code while the compiler reasons about it and makes it asynchronous (both correctly and efficiently).
  • Personally, I don't consider C++ (even modern C++) to be an easily securable language.  To me, the spec is far too complex to be reasoned about properly (especially compared to C) without ending up with code that produces undefined behavior.
    • I'm not too familiar with Zig's memory safety guarantees, so I can't say anything there.
    • So far Rust, with its borrow checker (which, as I understand it, is actually a proof checker in disguise), seems the most appropriate compiler for providing the security and productivity required in modern programming.  The downside, from what I understand, is that it's hard to learn.  Also, interfacing traditional code with Rust makes things harder, since you get a mix of 'safe' and 'unsafe' code calling each other; as I understand it, this is still an unresolved problem.
    • There's something to be said for the programmer being able to reason about their program in their head.  I think that's part of the reason C remains a popular language despite its age and demerits.
Now, I think all of these points as well as Jean-Baptiste's points center around technical debt:
  • Unix's assumption that we work on a PDP-11 is definitely technical debt that forces programmers to write under less-than-ideal constraints.  The "end" of Moore's law has made chips transition from faster clocks to increased parallelism, and that's forced developers to adapt to a completely different paradigm.  A big part of the problem is that both the entire software stack (OS, compilers, programming languages) and the way we think and teach about programming have needed to be re-engineered to fit this new paradigm.  This leads back to my earlier comment about async code producing hard-to-debug behavior if great care isn't taken.
  • Binary compatibility layers facilitate transitioning from old code and platforms to new ones, and perhaps we don't leverage them enough.  Part of the problem is that they can be hard to maintain: standards change over time, and there aren't necessarily hard and fast divisions between incompatible versions of platforms.  Switching CPU platforms compounds this problem, as it can be difficult to efficiently emulate binaries on an architecturally dissimilar platform.  Perhaps this is changing: look at how efficiently x64 code runs on the new Apple ARM chips.
  • Yes! The Minix kernel needs some TLC, perhaps even an overhaul!  For years I've wondered whether the microkernel-style OS can't be turned into a plug-and-play kind of architecture: you pick your favorite microkernel, pair it with your favorite set of services, and pair those with your favorite user space.  Easier said than done, but I don't think it's impossible.  Perhaps such a model can only be implemented using a platform made from scratch and a modern programming language (or a programming platform that has yet to be invented).

In the end, I think the real issue is one of manpower (and ultimately money): whatever ideas are worked on (whether it's a brand-spanking-new platform, wearing overalls while working on decades-old code, or anything in between) require people interested, involved and invested in the idea who have both the experience and time to devote to said project.  Regardless of the direction this project takes in the future, we currently lack such resources despite the ongoing interest.


Jean-Baptiste Boric

May 16, 2022, 11:12:59 AM
to minix3
Hi stux,

I've probably not emphasized the other side of my opinion enough, the pragmatic/practical one. I'll elaborate further.

On bleeding-edge vs. "withered" technology (to cite Gunpei Yokoi), my approach is as follows. Obsolete technology, that is technology that has been superseded by safer alternatives, should be deprecated within a practical timeframe to be preserved as possibly-living historical artifacts. Old technology, that is technology that has been superseded by merely more efficient alternatives, should be maintained if practical and preserved if not. Modern technology should be embraced as long as it is safe and ready for use.

With this, I deem C to be obsolete technology because much, MUCH safer alternatives exist. It's not that C doesn't work in an absolute sense, but it is legendarily hard to write correct code with it, especially regarding integer overflow and memory safety. This translates to countless bugs, notably security bugs in critical software components. Take Zig for example, a programming language specifically designed as a competitor and replacement to C. It:
  • Mandates compile-time errors and run-time traps (in safety-enabled builds) on undefined behavior such as integer overflow, instead of leaving compilers carte blanche to do whatever the hell they want at the first opportunity.
  • Provides built-in overflow-checking arithmetic functions and dedicated overflow arithmetic operations, instead of assuming the programmer knows how to write overflow checks correctly every time they are needed.
  • Uses arbitrary-precision compile-time integer literals (comptime_int), instead of assuming every integer constant is an int unless suffixed otherwise and silently truncating those that don't fit.
  • Requires explicit casting on every unsafe type coercion, instead of silent casts and a one-size-fits-all escape hatch.
  • Has optional types, instead of trusting the programmer to never, ever forget a NULL check on a C pointer.
  • Has real arrays and slices with enforced lengths (plus sentinel-terminated arrays, mostly to safely interface with C strings), instead of raw C pointers with no additional semantics.
  • Has reasonably powerful metaprogramming capabilities that enable, for example, compile-time, type-checked string formatting, instead of free-for-all varargs with compiler-dependent warnings for printf-style functions.
  • Has extensive built-in testing capabilities leveraging said metaprogramming, instead of leaving testing as an afterthought.
  • Promotes passing allocators to functions that allocate memory dynamically, instead of leaving programmers to implicitly rely on malloc/free or something else.
  • While it provides neither a borrow checker nor RAII by choice, it does provide defer statements, instead of hoping programmers covered all control-flow cases for freeing resources.
There's only so much that can be papered over with code reviews, static analysis, fuzzing, undefined behavior sanitizers and third-party libraries. C is kinda like asbestos in a sense: while it works and it's safe as long as the wall is sealed, it's everywhere and we should really stop adding more of it. Except that hackers everywhere around the world are hammering every publicly-exposed, juicy-looking wall looking for holes and most are made of damp drywalls instead of reinforced concrete. C is simply no longer fit for purpose in security-sensitive contexts (looking at you, OpenSSL), let alone inside trusted computing bases.

On asynchronous programming, not every single program needs asynchronous operations or asynchronous I/O. However, most software stacks remotely concerned about throughput should probably be engineered to be asynchronous for scalability reasons. It's easier to turn an asynchronous operation into a synchronous one (initiate it and then wait for it) than the opposite (run it on a separate thread and somehow synchronize with it), or to have multiple ongoing asynchronous operations than synchronous ones. This assumes that programming languages and libraries provide good, easy-to-use asynchronous support, which C and the Unix/POSIX APIs historically do an exceptionally bad job of.

On compatibility layers, source-level and binary-level approaches are similar in that both enable software made for one platform to run on another; they differ on whether new binaries need to be compiled or existing binary artifacts can be reused as-is. What I observe is that MINIX3's approach to the NetBSD userland ultimately requires a lot of maintenance to keep it up to date with upstream. Therefore, a new approach is needed to solve the maintainability issue for the userland, of which NetBSD binary syscall emulation is but one option among others. Ideally, as a micro-kernel operating system we shouldn't depend on any specific userland; the way NetBSD has been deeply integrated with MINIX3 prevents the use of any alternative without a massive engineering effort.

On capability-based systems, I've already explained that Unix's pervasive reliance on ambient authority and global namespaces is problematic, but it doesn't stop there. Traditional user-based access permissions, for example, are fine for isolating untrusted users from each other, but offer little help in isolating users from untrusted applications. Downloading random applications off the Internet is a reality that predates the smartphone era by decades; saying that the operating system itself won't get compromised because the user doesn't have administrative rights isn't helpful when the user account holds a person's entire digital life one program invocation away from disaster. The trusted computing base of a cranky old Unix sysadmin is not the same as the trusted computing base of an average end-user.

A capability-based system can also help with the componentization of software. Fuchsia's overall application model is an example of this, but its driver runtime model in particular allows drivers either to be collocated within a process for performance or to be isolated across process boundaries for security, transparently. This basically allows for policy-driven instead of architecturally-mandated privilege separation of device drivers, and the idea could be extended to all applications and services on a system; just like threads in user space, processes don't have to be exclusively a kernel construct. Ideally, we wouldn't need to choose one architecture ahead of time between single-address-space unikernels, monolithic kernels and micro-kernels, but could instead mix and match as policy dictates and the user desires.

So, what to do then? Well, I still haven't actually written an actionable proposal and I've ranted enough again for one evening. Hopefully my personal stance on operating system architecture has been clarified a bit, but clearly its practical incarnation would not be a direct continuation of MINIX3's current source tree. Heck, I'm not even sure the end result could really be called MINIX4: while I don't think it would be incompatible with MINIX3's currently stated goals, it is at the very least a huge change in execution. It also raises the question of how much, if any, existing MINIX3 source code would be reused (the answer doesn't have to be none).

Finally, on the topic of manpower: I totally agree. I've just started working in my spare time on decompiling an old PlayStation video game, mostly because I can and it's fun to me. I'm not motivated to work on MINIX3 as it currently stands, because I'm not interested in it, for reasons I've developed in an absurd amount of detail. Would I be interested in working on something that is not quite MINIX3? Maybe, maybe not; "not quite MINIX3" is quite vague after all...

Best regards,


Jul 16, 2022, 8:35:44 AM
to minix3
The problem with Zig is that it is different from C. It doesn't have the same familiarity. It may do cooler things, but what about using it? As someone who has been using C and C-like languages for a long time, Zig doesn't seem like a friendly language to me. I'd rather use C than have to reason in a language completely alien to me; I do not need to test my knowledge of a programming language as I code, on top of the implementation effort itself. Also, if we have to 'upgrade' to a new language, why not use one that is just C with the safeguards added? There are numerous programming languages that achieve what Zig does but which are perhaps lesser-known. In the end, to me, switching doesn't make sense: most of the code is written in C, and it would be more effort to rewrite all of that than to just continue on the same track.

Now, you would certainly argue that we can keep the old code base and use Zig's ability to interface with C and act as a C compiler to progressively move to Zig. As I said previously, there are numerous other languages that can very easily interface with C (in fact, it seems to be a popular feature), and being able to compile C is also nothing new, so consideration should be given primarily to languages which are closer to C in design, not to Zig or other Rust-like languages such as V and Vale. In the end, the best language to interface and compile with C is C itself, not some current trend. All languages grow old and new languages spawn every day of the week. It cannot be wise to be constantly moving to a new language simply because it is supposed to be better; it might be better designed and yet still not be worth it for different reasons.

There's also the question of whether a language will last. Even with proper backing, some languages just die. Gnome's Vala, for example, which was supposed to be a better C++ and a replacement for C, eventually became moribund and goes almost unused today, save for some very notable exceptions. There's also the infamous example of CoffeeScript, a well-known story that doesn't need to be retold here. Those languages, just like Zig, were initially very popular and used for some pretty ambitious projects. To me, Zig (and many others, I might add) looks just like that: another of those. I guess my point is that new programming languages usually die before old programming languages do. While old programming languages continue to evolve and receive updates, new ones end up stagnating or making only minuscule progress until they become completely irrelevant. Old languages don't have that kind of risk.