
is there a version of unix written in Ada


gdo...@gmail.com

Jul 24, 2020, 6:11:49 PM
is there a unix like OS written completely in Ada?

mockturtle

Jul 25, 2020, 3:37:25 AM
On Saturday, July 25, 2020 at 12:11:49 AM UTC+2, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?
I remember someone was writing an OS in Ada, but I do not remember who it was, nor the name of the project, nor whether it was unix-ish.

I'm not much help, I'm afraid (but maybe I'll trigger someone else... ;-)

Niklas Holsti

Jul 25, 2020, 4:47:37 AM
On 2020-07-25 1:11, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?


The short answer is "no".

There have certainly been operating systems written in Ada -- the OS for
the Nokia MPS-10 minicomputer is an example.

There are several real-time kernels and similar low-level SW components
written in Ada, but probably they do not qualify as "Unix-like",
depending on what you mean by that term.

Why do you ask?

--
Niklas Holsti
niklas holsti tidorum fi
. @ .

Stéphane Rivière

Jul 25, 2020, 5:37:01 AM
See OS section of https://github.com/ohenley/awesome-ada

> There have certainly been operating systems written in Ada -- the OS for
> the Nokia MPS-10 minicomputer is an example.

Wasn't aware of that, thanks! I found this, but there are very few refs on the net...

https://dl.acm.org/doi/abs/10.1145/989798.989799

--
Be Seeing You
Number Six

Luke A. Guest

Jul 25, 2020, 6:43:01 AM
On 24/07/2020 23:11, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?
>

You implement this:


https://en.wikipedia.org/wiki/Xv6

Starting with this:

https://wiki.osdev.org/Ada_Bare_bones

But be prepared to break into C for some bits, e.g. varargs.
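
A common workaround for the varargs point, sketched here with invented
names (Put_Int and the C shim put_int are illustrative, not an existing
API): keep the variadic call on the C side behind a fixed-arity wrapper
and import only the wrapper.

   with Interfaces.C;
   with Interfaces.C.Strings;

   procedure Varargs_Demo is
      --  The C side defines a fixed-arity shim hiding variadic printf:
      --  void put_int (const char *s, int v) { printf ("%s: %d\n", s, v); }
      procedure Put_Int (Label : Interfaces.C.Strings.chars_ptr;
                         Value : Interfaces.C.int)
        with Import, Convention => C, External_Name => "put_int";

      Label : Interfaces.C.Strings.chars_ptr :=
                Interfaces.C.Strings.New_String ("answer");
   begin
      Put_Int (Label, 42);
      Interfaces.C.Strings.Free (Label);
   end Varargs_Demo;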

Jesper Quorning

Jul 25, 2020, 10:43:16 AM
On Saturday, July 25, 2020 at 00:11:49 UTC+2, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?

I do not know whether it is unix-like, but this [1] looks active. Maybe he needs help.

My own dream was to port GNU/Hurd to Ada while renaming it to something not hurding so much.

[1] https://github.com/ajxs/cxos


/j

Andreas ZEURCHER

Jul 25, 2020, 3:20:27 PM
On Friday, July 24, 2020 at 5:11:49 PM UTC-5, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?

In 1981, there in fact was one that had 2 public releases with work in progress on Version 3: iMAX-432, depending on how puritanical one wishes to be about what is or is not Unix-like. (iMAX-432 was far more Unix-like than, say, MVS-like or CP/M-like.)

If anyone has an inside negotiating track at Intel (or the contracting firm that Intel hired to develop it), perhaps they would be willing to open-source the old iMAX-432 operating system that was released for the iAPX 432, a processor designed from the ground up to have an Ada-centric instruction set. Although it was more Multics-esque than Unix-esque*, and although it was written specifically for the iAPX 432 (and thus had much iAPX 432-only assembly language), it should be relatively easy to transliterate into other ISAs, because the iAPX 432 ISA more closely resembles Java bytecode, LLVM bitcode, and C# CIL/MSIL than other rudimentary machine codes of that era, due to being object-based/OO-lite in the hardware's machine code (which is what doomed the iAPX 432 in the early 1980s: it was so complex that it required 3 separate IC dies in 3 separate ceramic packages, and it ran relatively hot).

* Conversely, both Multics & our modern Unix are nowadays birds of the same feather despite the multi-decade dislocation in time from each other, due to both having:
1) multiple threads per address space;
2) multiple DLLs per address-space;
3) multiple memory-mapped files (i.e., mmap(2) in Unixes versus snapping segment-files in Multics);
4) IPC based on multiple threads or multiple processes pending on a single message-queue;
5) soft real-time thread scheduling priorities in addition to time-sharing scheduling priorities;
and
6) GNU-esque long-form whole-word and short-form abbreviated-letter variants of each hyphenated command-line flag.
This is in contrast to 1970s-era spartan Unix, which abhorred all of these multiplicities; hence AT&T's uni-based name, adopted in AT&T's 1970 divorce from the MIT/GE/AT&T/Honeywell Project MAC, in defiance of Project MAC's multi-based name: the tongue-in-cheek humor of Unix's name is that eunuchs is Multics castrated. Eschewing singleton this and singleton that, Unix nowadays is no longer a castrated eunuch, having reintroduced a cousin-like variant of nearly every multiplicity feature of Multics other than the multiple rings (unless one counts today's VM hypervisors as reintroducing a cousin of that one too).

https://en.wikipedia.org/wiki/IMAX_432

Stéphane Rivière

Jul 26, 2020, 3:45:54 PM

> I remember someone was writing an OS in Ada, but I do not remember who was, nor the name of the project, nor if it was unix-ish.

In the very old archive https://stef.genesix.org/aide/aide-src-1.04.zip
you will find:

- the last RTEMS 3.2.1 Ada sources (yes, old RTEMS releases were
offered in two flavors, Ada and C), which come with docs & manuals;

- the Ada sos-os series (based on edu-os, which is in C).

Jeffrey R. Carter

Jul 26, 2020, 6:17:10 PM
On 7/26/20 9:45 PM, Stéphane Rivière wrote:
>
> - The last RTEMS 3.2.1 Ada sources (yes... old RTEMS releases are
> offered in two flavors : Ada and C ) comes with docs & manuals.

MaRTE OS implements the Minimal Real-Time POSIX.13 profile in Ada, so it should be Unix-like.

https://marte.unican.es/

The same group recently announced M2OS, which is also in Ada, but not Unix-like.

https://m2os.unican.es/

--
Jeff Carter
"Drown in a vat of whiskey. Death, where is thy sting?"
Never Give a Sucker an Even Break
106

Stéphane Rivière

Jul 27, 2020, 3:40:07 AM

> The same group recently announed M2OS, which is also in Ada, but not
> Unix-like.
>
> https://m2os.unican.es/

Not aware of that, thanks Jeffrey.

I will test it; the toolchain is Linux-based and includes GDB...

Stéphane Rivière

Jul 27, 2020, 3:40:07 AM
> In 1981, there in fact was one that had 2 public releases with work in progress on Version 3: iMAX-432, depending on how puritanical one wishes to be about what is or is not Unix-like. (iMAX-432 was far more Unix-like than, say, MVS-like or CP/M-like.)

Very interesting, Andreas, thanks for this part of Ada and CPU history...

nobody in particular

Jul 27, 2020, 10:58:37 AM
On 25/07/2020 19:20, Andreas ZEURCHER wrote:
> On Friday, July 24, 2020 at 5:11:49 PM UTC-5, gdo...@gmail.com wrote:
>> is there a unix like OS written completely in Ada?
>
> In 1981, there in fact was one that had 2 public releases with work in progress on Version 3: iMAX-432, depending on how puritanical one wishes to be about what is or is not Unix-like. (iMAX-432 was far more Unix-like than, say, MVS-like or CP/M-like.)
>
> If anyone has an inside negotiating track at Intel (or the contracting firm that Intel hired to develop it), perhaps they would be willing open-source the old iMAX432 operating system that was released for the iAPX432 processor that was designed from the ground up to have an Ada-centric instruction set.

I guess it could be worthwhile contacting Steve Lionel, who recently
retired from Intel after working for DEC, COMPAQ, and HP on Fortran
compilers. He has a blog site; I'll not post the details here so as not
to encourage automated spam. Doctor Fortran is his nickname.

nobody in particular

Jul 27, 2020, 11:00:53 AM
On 27/07/2020 07:40, Stéphane Rivière wrote:
>> In 1981, there in fact was one that had 2 public releases with work in progress on Version 3: iMAX-432, depending on how puritanical one wishes to be about what is or is not Unix-like. (iMAX-432 was far more Unix-like than, say, MVS-like or CP/M-like.)
>
> Very interesting Adreas, thanks for this part of Ada and CPU history...

Have you seen the iment site of Mary Van Deusen? It is full of Ada
history; a wonderful site.

Shark8

Jul 27, 2020, 4:28:50 PM
On Friday, July 24, 2020 at 4:11:49 PM UTC-6, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?

Not that I'm aware of.
But here's an even better question: if Unix derives many of its warts from C, then what would an OS developed with Ada look like? -- I found several references to an "Army Secure Operating System", which was an implementation of the "Orange Book" requirements for a secure operating system.

IMO, Unix and C have set the industry back decades. If OSes interest you, please study and take inspiration from non-Unix OSes.

DrPi

Jul 28, 2020, 9:08:14 AM
On 27/07/2020 at 00:15, Jeffrey R. Carter wrote:
> On 7/26/20 9:45 PM, Stéphane Rivière wrote:
>>
>> - The last RTEMS 3.2.1 Ada sources (yes... old RTEMS releases are
>> offered in two flavors : Ada and C ) comes with docs & manuals.
>
> Marte OS implements Minimal Real-Time POSIX.13 in Ada, so it should be
> Unix-like.
>
> https://marte.unican.es/
>
> The same group recently announed M2OS, which is also in Ada, but not
> Unix-like.
>
> https://m2os.unican.es/

Ada has tasking built into the language. What's the usefulness of M2OS?


Simon Wright

Jul 28, 2020, 12:48:49 PM
DrPi <3...@drpi.fr> writes:

> Ada has embedded thread scheduling. What's the usefulness of m2os ?

The Ada language defines the syntax and semantics of tasking, but
whether an implementation supports tasking is a different matter.

If tasking is supported, there will be a runtime system that provides
it, and the compiler knows how to translate the code you write into
calls on that runtime system. If you write 'task T;' the code generated
by GNAT (in a Ravenscar system) will call

   procedure Create_Restricted_Task
     (Priority             : Integer;
      Stack_Address        : System.Address;
      Size                 : System.Parameters.Size_Type;
      Sec_Stack_Address    : System.Secondary_Stack.SS_Stack_Ptr;
      Secondary_Stack_Size : System.Parameters.Size_Type;
      Task_Info            : System.Task_Info.Task_Info_Type;
      CPU                  : Integer;
      State                : Task_Procedure_Access;
      Discriminants        : System.Address;
      Elaborated           : Access_Boolean;
      Chain                : in out Activation_Chain;
      Task_Image           : String;
      Created_Task         : Task_Id);

The procedure body is the business of the specific runtime system.

M2OS doesn't support Ada tasking at all. Instead, it provides a set of
primitive operations or API (see the web site); to create a "task", you
provide an initialization procedure and a body procedure and call an API
operation to register the code so that it can be called when necessary.
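
For illustration only, that registration pattern might look like the
sketch below; Kernel_API and all the other names are invented stand-ins,
not the real M2OS operations (see its web site for those):

   package Kernel_API is
      type Task_Procedure is access procedure;
      --  Registering the pair replaces declaring "task T;".
      procedure Register_Task (Init, Job : Task_Procedure);
   end Kernel_API;

   package body Kernel_API is
      Registered_Init, Registered_Job : Task_Procedure;
      procedure Register_Task (Init, Job : Task_Procedure) is
      begin
         --  A real kernel would store Job and invoke it from its
         --  scheduler; here we merely remember both and run Init once.
         Registered_Init := Init;
         Registered_Job  := Job;
         Registered_Init.all;
      end Register_Task;
   end Kernel_API;

   package Blinker is
      procedure Init;   --  one-time set-up, e.g. configure the LED pin
      procedure Step;   --  the "body": runs to completion, then returns
   end Blinker;

   package body Blinker is
      procedure Init is begin null; end Init;
      procedure Step is begin null; end Step;
   end Blinker;

   with Kernel_API, Blinker;
   procedure Main is
   begin
      Kernel_API.Register_Task (Blinker.Init'Access, Blinker.Step'Access);
   end Main;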

A higher-level facility can translate your code using (some?) Ada
tasking constructs into equivalent code calling the M2OS API.

The advantage claimed is that the memory footprint is much reduced, so
that the application will fit into a smaller MCU.

Fabien Chouteau

Jul 28, 2020, 1:00:05 PM
On Tuesday, July 28, 2020 at 6:48:49 PM UTC+2, Simon Wright wrote:
> A higher-level facility can translate your code using (some?) Ada
> tasking constructs into equivalent code calling the M2OS API.

They do the translation of the code with Libadalang. This is an interesting approach.

DrPi

Jul 29, 2020, 5:20:27 AM
Thanks for the explanation.
Difficult to understand how adding a software layer achieves lower
resource consumption.

DrPi

Jul 29, 2020, 5:21:39 AM
Fabien, does AGATE (https://github.com/Fabien-Chouteau/AGATE) achieve
the same goal?

Fabien Chouteau

Jul 29, 2020, 5:28:31 AM
On Wednesday, July 29, 2020 at 11:21:39 AM UTC+2, DrPi wrote:
> Fabien, does AGATE (https://github.com/Fabien-Chouteau/AGATE) achieve
> the same goal?

I have nothing to transform the code in AGATE. I hadn't thought about it; that's why I say this is interesting :)

DrPi

Jul 29, 2020, 11:02:56 AM
I mean, what is the goal of AGATE? Replacing Ada tasking?

Fabien Chouteau

Jul 29, 2020, 11:11:52 AM
On Wednesday, July 29, 2020 at 5:02:56 PM UTC+2, DrPi wrote:
> I mean, what is the goal of AGATE? Replacing Ada tasking?

There is no real goal for AGATE. It was a two-day hacking session to see if I was still able to write an "RTOS" from scratch.

But indeed, one thing I wanted to see was what the API of an Ada tasking library would look like, as opposed to having the tasking in the run-time. In that sense it is similar to M2OS.

Simon Wright

Jul 29, 2020, 12:53:52 PM
DrPi <3...@drpi.fr> writes:

> Difficult to understand how adding a software layer achieves a lower
> resources consumption.

If you have no support for Ada tasking, i.e. pragma Restrictions
(No_Tasking), then the compiler will never generate code that calls
Create_Restricted_Task and will reject a compilation with the word
'task' in it.

M2OS is in this respect quite like pthreads.
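
To make that concrete, the restriction is just a configuration pragma
(e.g. in gnat.adc); a minimal illustration:

   pragma Restrictions (No_Tasking);

   --  This unit compiles under the restriction; uncommenting the task
   --  declaration would make the compilation illegal.
   procedure No_Tasking_Demo is
      --  task T;
   begin
      null;
   end No_Tasking_Demo;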

DrPi

Jul 29, 2020, 4:41:54 PM
Ok. Thanks.

DrPi

Jul 29, 2020, 4:43:00 PM
Understood. Thanks.

gdo...@gmail.com

Jul 29, 2020, 8:57:07 PM
On Friday, July 24, 2020 at 6:11:49 PM UTC-4, gdo...@gmail.com wrote:
> is there a unix like OS written completely in Ada?

I don't quite remember where I read that C was not intended
for coding in the millions of lines, but it has done
some amazing work for something not intended for that. It
seems that Ada was intended to handle such massively
large coding efforts. I guess, though, that a Unix-type OS
might be done in 100,000 lines or so. I have wondered
whether using the language's features would produce a secure
OS, with much less need for security updates.

Thanks everyone, it is more than clear Ada can do an OS, from the
realtime OS links.

Shark8

Jul 31, 2020, 10:01:41 AM
Part of the issue with security is also in the underlying methodology/design of the system. One good example here is the "text-first" approach of the command-line, especially if you're piping together inputs and outputs: inherent to this is the forced ad hoc re-parsing of data. This text-first approach is quick because it's easier at first, but it is more error-prone; see here: https://www.reddit.com/r/programming/comments/faxlva/i_want_off_mr_golangs_wild_ride/fj2z4zi?utm_source=share&utm_medium=web2x

To contrast, here's an alternative & proper design for command-line:
(1) Have an underlying typesystem the operating-system is aware of.
(2) Have objects in this typesystem with round-trip stable serialize/deserialize functions.
(3) Have the command-line interpreter parse the commands into streams of these objects. [This should be done by a common OS-library.]
(4) Have programs with streams for input and output.

And now you have a system which not only has a consistent interface, but requires fewer cycles (due to not having to re-parse data), and is more secure. A minimal sketch of the idea follows.
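
A minimal Ada sketch of points (1)-(4); every name here is invented for
illustration (no existing OS provides this API). The idea is just that a
"pipe" carries typed objects with round-trip serialization instead of
bytes of text:

   with Ada.Streams;

   package Shell_Types is
      --  (1) The typesystem the OS is aware of: a common root type.
      type Object is abstract tagged null record;

      --  (2) Round-trip stable serialization: reading back what Write
      --  produced must reconstruct an equal object.
      procedure Write
        (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
         Item   : Object) is abstract;

      --  (3)/(4) A pipeline stage consumes and produces typed objects,
      --  never raw text, so no stage ever re-parses its input.
      type Stage is limited interface;
      procedure Process
        (Self   : in out Stage;
         Input  : Object'Class;
         Output : not null access Ada.Streams.Root_Stream_Type'Class)
         is abstract;
   end Shell_Types;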

Olivier Henley

Sep 23, 2020, 1:39:11 PM
> Thanks everyone, it is more than clear Ada can do an OS, from the
> realtime OS links.

There has been some interesting development lately:

- https://blog.adacore.com/cubit-a-general-purpose-operating-system-in-spark-ada
- https://github.com/RavSS/HAVK
- https://github.com/ajxs/cxos
- https://archive.fosdem.org/2020/schedule/event/ada_spunky/

Also, you can find a 'probably' complete list here: https://github.com/ohenley/awesome-ada#OS-and-Kernels

Personally, I think:

a) efforts should be organized around a single project,
b) we should find some financing for such a project and
c) a lightweight, but formal design procedure should be put in place.

Otherwise, we will live with short-lived attempts and a C-based Linux, forever unstable with new holes every 3 months, and some Rust that will inevitably make its way in.

I do not have the pockets to finance such a project, but I could definitely see myself striving to organize and bridge different developers.

If by any chance you know of a billionaire who has better ideas than trying to go to Mars and/or build a bigger yacht than his neighbor's... give him my contact. Thx.

DrPi

Sep 25, 2020, 11:06:49 AM
On 23/09/2020 at 19:39, Olivier Henley wrote:
>> Thanks everyone, it is more than clear Ada can do an OS, from the
>> realtime OS links.
>
> There has been some interesting development lately:
>
> - https://blog.adacore.com/cubit-a-general-purpose-operating-system-in-spark-ada
> - https://github.com/RavSS/HAVK
> - https://github.com/ajxs/cxos
> - https://archive.fosdem.org/2020/schedule/event/ada_spunky/
>
> Also, you can find a 'probably' complete list here: https://github.com/ohenley/awesome-ada#OS-and-Kernels
>
> Personally, I think:
>
> a) efforts should be organized around a single project,
Which kind of OS?
- General purpose?
- Real-time?
- Real-time POSIX?
- For microprocessors?
- For microcontrollers?

Andreas ZEURCHER

Sep 25, 2020, 1:31:58 PM
On Friday, September 25, 2020 at 10:06:49 AM UTC-5, DrPi wrote:
> On 23/09/2020 at 19:39, Olivier Henley wrote:
> >> Thanks everyone, it is more than clear Ada can do an OS, from the
> >> realtime OS links.
> >
> > There has been some interesting development lately:
> >
> > - https://blog.adacore.com/cubit-a-general-purpose-operating-system-in-spark-ada
> > - https://github.com/RavSS/HAVK
> > - https://github.com/ajxs/cxos
> > - https://archive.fosdem.org/2020/schedule/event/ada_spunky/
> >
> > Also, you can find a 'probably' complete list here: https://github.com/ohenley/awesome-ada#OS-and-Kernels
> >
> > Personally, I think:
> >
> > a) efforts should be organized around a single project,
> Which kind of OS ?
> - General purpose ?
> - Real-time ?
> - Real-time Posix ?
> - For microprocessor ?
> - For microcontroller ?

It would be real-time POSIX with general-purpose time-sharing during the idle(-from-realtime's-vantage-point) loop. It would be focused primarily on embedded processors (all 32-bit & 64-bit processors), likely not 8-bit and 16-bit microcontrollers (although there do exist some 32-bit processors that are categorized as microcontrollers nowadays). The bigger question would be GPUs & DSPs.

> > b) we should find some financing for such a project and

That is always the trick: how to have a funding and business model that enables a long-term going-concern. Depending on volunteers and donationware quickly exhausts its limited funding.

> > c) a lightweight, but formal design procedure should be put in place.

A guiding committee of like-minded people should be benevolent dictators. Linux has been successful because it had Linus as benevolent dictator.

DrPi

Sep 26, 2020, 4:50:52 AM
On 25/09/2020 at 19:31, Andreas ZEURCHER wrote:
> On Friday, September 25, 2020 at 10:06:49 AM UTC-5, DrPi wrote:
>> On 23/09/2020 at 19:39, Olivier Henley wrote:
>>>> Thanks everyone, it is more than clear Ada can do an OS, from the
>>>> realtime OS links.
>>>
>>> There has been some interesting development lately:
>>>
>>> - https://blog.adacore.com/cubit-a-general-purpose-operating-system-in-spark-ada
>>> - https://github.com/RavSS/HAVK
>>> - https://github.com/ajxs/cxos
>>> - https://archive.fosdem.org/2020/schedule/event/ada_spunky/
>>>
>>> Also, you can find a 'probably' complete list here: https://github.com/ohenley/awesome-ada#OS-and-Kernels
>>>
>>> Personally, I think:
>>>
>>> a) efforts should be organized around a single project,
>> Which kind of OS ?
>> - General purpose ?
>> - Real-time ?
>> - Real-time Posix ?
>> - For microprocessor ?
>> - For microcontroller ?
>
> It would be real-time POSIX with general-purpose time-sharing during the idle(-from-realtime's-vantage-point) loop. It would be focused primarily on embedded processors (all 32-bit & 64-bit processors), likely not 8-bit and 16-bit microcontrollers (although there do exist some 32-bit processors that are categorized as microcontrollers nowadays). The bigger question would be GPUs & DSPs.

If the OS is micro-kernel based, there is no concern about GPU & DSP
drivers, as they are standard processes (running in user mode).

My "ideal" real-time POSIX OS would be a open source QNX.
Written in Ada, of course ;)

Shark8

Sep 27, 2020, 10:25:26 AM
On Wednesday, September 23, 2020 at 11:39:11 AM UTC-6, olivier wrote:
> > Thanks everyone, it is more than clear Ada can do an OS, from the
> > realtime OS links.
> There has been some interesting development lately:
>
> - https://blog.adacore.com/cubit-a-general-purpose-operating-system-in-spark-ada
> - https://github.com/RavSS/HAVK
> - https://github.com/ajxs/cxos
> - https://archive.fosdem.org/2020/schedule/event/ada_spunky/
>
> Also, you can find a 'probably' complete list here: https://github.com/ohenley/awesome-ada#OS-and-Kernels
>
> Personally, I think:
>
> a) efforts should be organized around a single project,
I have a lot of ideas about an operating system, as I've been toying with the idea for two decades, even starting to do an OS in Borland Pascal 7, which made things really nice as the executable was runnable both as a DOS program and booting to the bare metal. That said, POSIX is a terrible idea: it mandates many things that [IMO] severely constrain the architecture, including directory layout IIRC, and making another *nix is something that holds zero appeal to me; instead, I would advise having a very solid foundational framework and some "native" constructs that you're not going to find elsewhere.

> b) we should find some financing for such a project and
That's a tall order.

> c) a lightweight, but formal design procedure should be put in place.
True.

> Else, we will live with non-lasting attempts and a C base linux, forever unstable with new holes every 3 months, and some Rust that will inevitably makes its way in.
And there you start hitting on one big issue: C is so ingrained in *nix that "making a Unix in Ada" is begging to fail. There are many Unix-isms that are simply bad design, like the ad-hoc text-processing imposed by the design of piping programs together. (Yes, it does result in security flaws.) Instead we ought to work "inside out" compared to the Unix mindset: create the foundational types and environment, complete with round-trip text-serialization, and then use that serialization in the command-line... Rather than forcing new programs to conform to ad-hoc outputs, we can have a system that offers a standard (and type-aware) set of streams for input and output.

It might be wise to break out the Orange Book, too: https://csrc.nist.gov/csrc/media/publications/conference-paper/1998/10/08/proceedings-of-the-21st-nissc-1998/documents/early-cs-papers/dod85.pdf

> I do not have the pocket to finance such a project but I could definitely project myself as thriving to organize and bridge different developers.
I could do some design write-ups, but they definitely aren't Unix-y.

> If by any chance you know of a billionaire that has better ideas than trying to go to Mars and/or build a bigger yacht than its neighbor... give him my contact. Thx.
Yep, there's always the financial aspect.

Dmitry A. Kazakov

Sep 27, 2020, 11:01:51 AM
On 27/09/2020 16:25, Shark8 wrote:

> ... POSIX is a terrible idea, it mandates many things that [IMO] severely constrain the architecture, to include directory layout IIRC, and making another *nix is something that holds zero appeal to me; instead, I would advise having a very solid foundational framework and some "native" constructs that you're not going to find elsewhere.

Sure. There is no reason to develop anything resembling UNIX.

An OS worth designing should be based on persistent objects and have no
files and filesystem whatsoever.

On the programming language side, Ada would require a type system whose
visibility and privacy could potentially be enforced by the hardware.

Presently it is not possible to map the private parts of a package, and
the types declared there, onto physically distinct memory pages
protected from reading/writing in the public view context. Calls to
primitive operations cannot be routed through the kernel. Tasks and
protected objects are not extensible. Without these, the OS API would
rapidly degrade to low-level C-esque stuff.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

Luke A. Guest

Sep 27, 2020, 2:54:10 PM
On 25/09/2020 18:31, Andreas ZEURCHER wrote:

>>> Personally, I think:
>>>
>>> a) efforts should be organized around a single project,
>> Which kind of OS ?
>> - General purpose ?
>> - Real-time ?
>> - Real-time Posix ?
>> - For microprocessor ?
>> - For microcontroller ?
>
> It would be real-time POSIX with general-purpose time-sharing during the idle(-from-realtime's-vantage-point) loop. It would be focused primarily on embedded processors (all 32-bit & 64-bit processors), likely not 8-bit and 16-bit microcontrollers (although there do exist some 32-bit processors that are categorized as microcontrollers nowadays). The bigger question would be GPUs & DSPs.

I'd aim for something like QNX: a µ-kernel with built-in distribution of
tasks over the network.

>>> b) we should find some financing for such a project and
>
> That is always the trick: how to have a funding and business model that enables a long-term going-concern. Depending on volunteers and donationware quickly exhausts its limited funding.

One of the reasons I've not enabled GitHub Sponsors is that I just
don't think people would donate.

Luke A. Guest

Sep 27, 2020, 2:55:28 PM
On 26/09/2020 09:50, DrPi wrote:
> If the OS is micro-kernel based, there is no concern about GPUs & DSPs
> drivers as they are standard processes (running in user mode).

Yup.

> My "ideal" real-time POSIX OS would be a open source QNX.
> Written in Ada, of course ;)

Just seen this after my last post. Like minds.

I once had the source to QNX when they opened it.

Luke A. Guest

Sep 27, 2020, 3:08:04 PM
In fact, I'd probably go with a combined QNX/AmigaOS-inspired µ-kernel,
just not a single-address-space OS, although one of those, using only Ada
for applications, would be perfect for smaller boards.

DrPi

Sep 27, 2020, 4:59:41 PM
On 27/09/2020 at 17:01, Dmitry A. Kazakov wrote:
> On 27/09/2020 16:25, Shark8 wrote:
>
>> ... POSIX is a terrible idea, it mandates many things that [IMO]
>> severely constrain the architecture, to include directory layout IIRC,
>> and making another *nix is something that holds zero appeal to me;
>> instead, I would advise having a very solid foundational framework and
>> some "native" constructs that you're not going to find elsewhere.
>
> Sure. There is no reason to develop anything resembling UNIX.
Good option but...
>
> An OS worth designing should be based on persistent objects and have no
> files and filesystem whatsoever.
...one problem with this concept is that you can't compile/run the huge
amount of existing software. You have to recreate everything, unless you
have a compatibility layer for legacy software.
>
> On the programming language side, Ada requires a type system with
> visibility and privacy potentially done per hardware.
>
> Presently it is not possible to map private parts of a package and types
> declared there onto physically different memory pages protected from
> reading/writing in public view context. Calls to primitive operations
> cannot be routed through the kernel. Tasks and protected objects are not
> extensible. Without these OS API would rapidly degrade to low-level
> C-esque stuff.
>
This will make memory management very, very complex.

Dmitry A. Kazakov

Sep 28, 2020, 3:41:27 AM
On 27/09/2020 22:59, DrPi wrote:
> On 27/09/2020 at 17:01, Dmitry A. Kazakov wrote:

>> An OS worth designing should be based on persistent objects and have
>> no files and filesystem whatsoever.
> ...one problem with this concept is you can't compile/run the huge
> amount of existing software. You have to recreate everything. Unless you
> have a comptibility layer for legacy software.

90% of it is garbage anyway; virtual machines are for the rest.

BTW, regarding files, the job of getting rid of them is basically done
by using streams.

>> On the programming language side, Ada requires a type system with
>> visibility and privacy potentially done per hardware.
>>
>> Presently it is not possible to map private parts of a package and
>> types declared there onto physically different memory pages protected
>> from reading/writing in public view context. Calls to primitive
>> operations cannot be routed through the kernel. Tasks and protected
>> objects are not extensible. Without these OS API would rapidly degrade
>> to low-level C-esque stuff.
>>
> This will make memory management very very complex.

Do you want it in Ada, or do you want it in C with Ada syntax?

To me, a new OS must have a new interface, which is a huge challenge,
because the interfaces of "modern" OSes are the state of the art of the late '70s.

Olivier Henley

Sep 28, 2020, 9:48:16 AM
Any good references about that?

Dmitry A. Kazakov

Sep 28, 2020, 10:48:46 AM
I never saw anything novel published on OS interfacing since eternity.
This is incredible in itself, considering how much has happened in the
last half-century: distributed computing, multiple cores, virtualization,
GPUs and vectorized computing, hundreds of generations of GUIs, a total
overhaul of all hardware I/O interfaces, security and consistency
challenges; yet nothing could disturb the serenity of the OS API.

Olivier Henley

Sep 28, 2020, 12:28:10 PM
If I could read your mind and thereby consult your knowledge, I would, but Elon Musk's chips are not available yet... ;)

It would be great if you could, at one point, gather those thoughts and document them in some way.
I am sure your encompassing knowledge would make a great design foundation.
Do you think you could make a 'lite' UML diagram of such ideas, so as to have a first step toward a concrete context, and publish it somewhere?

Personally, I think a GitHub readme.md page would be ideal at first, as people could contribute ideas and complements while everything still stays managed (pull request, approve, revise, etc.).

DrPi

Sep 28, 2020, 1:04:04 PM
If I remember correctly, I have already read something like this.
But when and where? No idea.

Olivier Henley

Sep 28, 2020, 1:30:16 PM
Landed on that, interesting read: (Modular Performance Analysis and Interface-Based Design for Embedded Real-Time Systems)
https://tik-old.ee.ethz.ch/file/2b9491dbe13085961038e4a53f7792e6/diss_wandeler.pdf

Paul Rubin

Sep 28, 2020, 1:47:26 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
> I never saw anything novel published on OS interfacing since eternity.

Do you remember the Microsoft Singularity research OS? Does it count?

Olivier Henley

Sep 28, 2020, 2:05:38 PM

> Landed on that, interesting read: (Modular Performance Analysis and Interface-Based Design for Embedded Real-Time Systems)
> https://tik-old.ee.ethz.ch/file/2b9491dbe13085961038e4a53f7792e6/diss_wandeler.pdf

If I get this right, the idea is to have a stream of 'described compute task units' and a stream of 'described compute availability/capability'. Therefore the system becomes 'dynamically adaptable' (you plug in a new, faster CPU and boom, the compute-unit burden starts to be processed at the new rate) and time-decidable (because all capabilities can be derived for a compute-unit type on every specific hardware compute device, we can compute the possible deadline)?

Would it need some modifications at the hardware level... e.g. to have proper information about device compute capabilities?

I am not sure we are talking about the same thing, but the paper is about what I summarize and is, again, a very interesting read.

Vincent Marciante

Sep 28, 2020, 2:40:02 PM
> I never saw anything novel published on OS interfacing since eternity.

GNU HURD NG
https://www.gnu.org/software/hurd/hurd/ng.html

Shark8

Sep 28, 2020, 3:36:26 PM
OK, here are some “modern” features from old OSes:
1. https://en.wikipedia.org/wiki/Burroughs_MCP (1960s; appeared 1961)
a. Written exclusively in a high-level language (Algol), no assembler.
b. Typed, Journaling file-system.
c. Code can only be generated via trusted compilers.
d. Burroughs libraries:
i. Completely control access to shared resources,
ii. Allowed safe data-access w/o process switching,
iii. Offer procedural entry-points to the client which are checked for a compatible interface before the client is linked to the library (like Ada’s overloading resolution + type-checking),
iv. Have multiple sharing modes:
(1) ‘shared by rununit’ — designed for COBOL, where a rununit is “an original initiating client plus the libraries it has linked to” and each rununit gets a library instance,
(2) ‘shared by all’ — all clients share the same instance,
(3) ‘private’ — each client gets a separate instance of the library.
2. https://en.wikipedia.org/wiki/Multics (1970s; appeared 1969)
a. “single-level store for data” — meaning there was no distinction between ‘disk’ and ‘memory’ — mapped all data into the address-space. In POSIX-terms this is similar to every single ‘file’ being mmap’ed.
b. CPUs, memory, and disks could be added/removed while the system was on-line due to extremely aggressive on-line reconfiguration support.
c. Designed to be a secure operating system, with more failures at the outset than they would have liked; by 1985 the OS reached the B2 level in the Orange Book (Trusted Computer System Evaluation Criteria; mentioned upthread).
3. https://en.wikipedia.org/wiki/OpenVMS (1970s; appeared 1977)
a. Common Language Environment, a standardized mechanism for interoperability between different programming languages. (Similar-ish to DOTNET’s CLR.)
b. Integrated database.
c. Easy clustering.
d. DCL: An extensible command language.
e. DECnet: An OSI (7–layer) networking.
4. https://en.wikipedia.org/wiki/Rational_R1000 (1985)
a. Combined OS, IDE, and Ada compiler into a single unit; see: http://www.somethinkodd.com/oddthinking/2006/01/07/rational-1000-a-surprising-architecture-from-a-surprising-source/
i. “Then they brought out a big whisk and mixed all the layers in together. The IDE was the operating system. The operating system was the Ada compiler. If you opened a command window to write a quick batch job, you wrote the batch job in Ada, using the IDE!”
b. Configuration management
c. Version control
d. Interactive design rule-checking and semantic-analysis
e. Source level debugging
f. Persistent memory/objects (?)

Those are just four old, and non-unix, operating systems which have relatively ‘modern’ features just being incorporated into newer OSes (and IDEs in the case of the R-1000). There are also many alternative modes of thought as to the architectures for computers: from proposed ‘Database-machines’ to ‘Dataflow-machines’ of the ‘70s & ‘80s, to the modern massively parallel machines like the GA–144, to memristor-based neural machines.

So, what would we want for an Ada OS? (Disregarding, for a moment, the base hardware.)

1. Persistent objects (as per Dmitry)
2. Content Addressable Memory-Architecture (?)
a. Pro:
i. Makes the natural access an ‘object’ rather than a base ‘address’.
ii. Actively breaks the idea that the system should be compatible with C.
b. Con:
i. Makes Ada 'Access a bit more odd.
ii. Makes most memory tricks invalid.
3. Integrated Database
a. Pro:
i. Makes searching, collating, and certain manipulations easier.
ii. Could be integrated with analytic units.
iii. Provides a universal, common interface to persistency… at least across the OS.
b. Con:
i. The sort of database impacts how familiar or useful the system is.
(1) Relational, most familiar to DBAs;
(2) Hierarchical, could be used to easily implement Version-Control, Continuous-Integration, and possibly directory-like navigation;
(3) Document, excellent for researchers and librarians;
(4) Graph, very general and could likely be able to replicate any of the above models… the problem here being having a good “model-map”.
ii. “What about vendor lock-in!!”
4. An integrated SMT prover?
a. Pro:
i. accessible to databases (for searching and filtering),
ii. accessible to compiler (perhaps for validating),
iii. Symbolic-execution (for analyzing programs & data),
iv. The ability for SPARK-style proving to be uniform on the platform regardless of client-language. (See item 7.)
b. Con:
i. More work,
ii. Would require some thought on the design.
5. A SOM/DSOM-like based OS-level type-system
a. Extended with the base meta-object having ASN.1 serialize/deserialize methods.
b. Incorporated into an OpenVMS Common Language Environment-like system, thus making it available for all supported programming languages.
c. Having types for:
i. Universal Integers (?);
ii. Universal Floats (?);
iii. Universal Fixed-points (?);
iv. Email-address, preventing idiotic definitions like “a string with the ‘@’ symbol inside” that are apt to occur with regex (a sketch follows this list);
v. phone numbers, the same but with phone-numbers;
vi. ISBN, useful for researchers/reference;
vii. DOI, useful for researchers/reference;
viii. WCS, useful for location;
ix. Time and Date;
x. If possible, more IR-ish constructs like:
(1) TASK — possibly allowing us to serialize/deserialize across a cluster’s machines,
(2) PACKAGE — allowing us to have a ‘native’ module,
(3) GENERIC — allowing us to parameterize compile-time,
(4) subprograms — possibly allowing us to distribute subprograms, and
(5) parameters — if we generalize to an “abstract parameter” or “parameter-interface”, we could use the same construct for compile-time [ie generics] and run-time.
xi. If item x is implemented in the system, then perhaps an interface for compilers as well; this could allow us to hook into the SMT prover and automatically get trusted compilers as in 1.c.
6. OSI-style networking
a. Pro:
i. This could allow client-programs to be essentially independent of network protocols in source-code.
b. Con:
i. This would throw off a lot of people: those who are used to depending on things like Connect( “http:some.site.org”, 1010 ); likely won’t enjoy this;
ii. Some method of indicating generalized connectivity might need to be designed.
iii. Using a pipe, configuration, or some other interface to set the appropriate parameters might be considered insecure.
7. Ground up proving and verification via SPARK, where possible.

I have some more ideas in my notes, one of which would be simultaneous development on multiple architectures like, say, SPARC, x86, Motorola 68k, and a virtual trinary-machine (this way shift-right and -left are multiples of 3 rather than 2) — this would be useful for eliminating the inherent reliance on low-level implementation details outside of implementation-dependent packages, and would likely help reduce the scope of such packages.
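
As a tiny illustration of item 5.c.iv, here is a sketch of an
Email_Address type validated at construction instead of being "a string
with an '@' inside"; the validation is deliberately simplistic and all
names are invented:

   with Ada.Strings.Unbounded;

   package Email_Addresses is
      type Email_Address is private;
      Invalid_Address : exception;
      function To_Address (Text : String) return Email_Address;
      function Image (Address : Email_Address) return String; -- round trip
   private
      use Ada.Strings.Unbounded;
      type Email_Address is record
         Local_Part : Unbounded_String;
         Domain     : Unbounded_String;
      end record;
   end Email_Addresses;

   package body Email_Addresses is
      function To_Address (Text : String) return Email_Address is
         At_Sign : Natural := 0;
      begin
         for I in Text'Range loop
            if Text (I) = '@' then
               if At_Sign /= 0 then
                  raise Invalid_Address;  --  more than one '@'
               end if;
               At_Sign := I;
            end if;
         end loop;
         if At_Sign <= Text'First or else At_Sign >= Text'Last then
            raise Invalid_Address;        --  missing local part or domain
         end if;
         return
           (Local_Part => To_Unbounded_String
                            (Text (Text'First .. At_Sign - 1)),
            Domain     => To_Unbounded_String
                            (Text (At_Sign + 1 .. Text'Last)));
      end To_Address;

      function Image (Address : Email_Address) return String is
      begin
         return To_String (Address.Local_Part) & '@'
                & To_String (Address.Domain);
      end Image;
   end Email_Addresses;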

Dmitry A. Kazakov

Sep 28, 2020, 4:27:53 PM
What is novel in there? The idea of reducing the OS down to nothing is
neither new nor productive, IMO.

Dmitry A. Kazakov

Sep 28, 2020, 4:28:51 PM
Well, sort of. Though it is obviously far too heavyweight, and dynamic
typing + GC are non-starters. Arguably it is not even an OS.

Dmitry A. Kazakov

Sep 28, 2020, 4:30:25 PM
On 28/09/2020 18:28, Olivier Henley wrote:
> On Monday, September 28, 2020 at 10:48:46 AM UTC-4, Dmitry A. Kazakov wrote:
>> On 28/09/2020 15:48, Olivier Henley wrote:
>>> On Monday, September 28, 2020 at 3:41:27 AM UTC-4, Dmitry A. Kazakov wrote:
>>>> On 27/09/2020 at 17:01, Dmitry A. Kazakov wrote:
>>>> To me a new OS must have new interface, which is a huge challenge,
>>>> because interfaces of "modern" OSes are state of the art of late 70's.
>>>
>>> Any good references about that?
>> I never saw anything novel published on OS interfacing since eternity.
>> This is incredible on itself considering how much happened in the last
>> half of century: distributed computing, multiple cores, virtualization,
>> GPUs and vectorized computing, hundreds of generations of GUI, total
>> overhaul of all hardware I/O interfaces, security and consistency
>> challenges yet nothing could disturb the serenity of OS API.
>
> If I could read your mind and therefore consult your knowledge I would but Elon Musk's chips are not available yet... ;)

I can't read it myself! (:-))

> It would be great if you could, at one point, gather those thoughts and document them in some way.
> I am sure your encompassing knowledge would make a great design foundation.
> Do you think you could make a 'lite' UML diagram of such ideas so as to have the first step toward a concrete context and publish it somewhere?

Before concrete designs, I see a lot of problems with the language itself
(not necessarily Ada). An efficient memory-mapped architecture of OS
objects requires a whole new language paradigm that prevents circumvention
of visibility rules. OS objects are inherently MI and MD, including
active objects (tasks in Ada) and synchronization objects (protected
objects in Ada). Consistency checks require package versioning, and so on and so forth.

There are reasons why low-level C rules the API. Higher-level interfaces
are incredibly fragile. I believe SPARK is the way, but I have no idea
where to start.

> Personally, I think a Github readme.md page would be ideal at first as people could contribute ideas, complements but still be managed. (pull request, approve, revise etc)

The problem is that all such projects tend to die due to lack of interest.

Shark8

Sep 28, 2020, 5:06:10 PM
On Monday, September 28, 2020 at 2:30:25 PM UTC-6, Dmitry A. Kazakov wrote:
>
> There are reasons why low-level C rules the API. Higher-level interfaces
> are incredibly fragile. I believe SPARK is the way, but I have no idea
> where to start.

I'd imagine with a system like this: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.26.2533&rep=rep1&type=pdf
but integrating passing SPARK-proofs into the upward-merge operation.

> > Personally, I think a Github readme.md page would be ideal at first as people could contribute ideas, complements but still be managed. (pull request, approve, revise etc)
> The problem is that all such projects tend to die due to lack of interest.

I could do this.

Paul Rubin

Sep 29, 2020, 7:54:19 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>> Do you remember the Microsoft Singularity research OS? Does it count?
> What is novel in there? The idea to reduce OS down to nothing is
> neither new nor productive, IMO.

One interesting thing was that it got rid of the need for address
translation hardware by implementing interprocess memory protection with
compile-time static analysis. That made IPC a lot faster and changed
the trade-offs for different concurrency approaches.

Dmitry A. Kazakov

Sep 30, 2020, 4:19:02 AM
OK, but that again is rather retrograde; MS-DOS pops into my mind (:-)).

Paul Rubin

Sep 30, 2020, 1:28:00 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
> OK, but that again is rather retrograde, MS-DOS pops into my mind (:-)).

MSDOS had no memory protection at all, and was basically single-tasking.
Singularity had the limitation that you were only allowed to use trusted
compilers, but in exchange it gave an interesting approach to
programming high-performance multiprocessor systems.

Dmitry A. Kazakov

Sep 30, 2020, 3:42:25 PM
Put a trusted compiler into MS-DOS; where is the difference? Tasking would
be up to the compiler's run-time, obviously.

I want an OS protecting from compilers I do not trust without
performance loss. Static checks must be enforced at run-time.

Then a fundamental hurdle is that you could never ensure balanced
resource sharing and QoS through static checks. Thus in the end, it is
all MS-DOS again (a so-called monitor, rather than a true OS).

Paul Rubin

Sep 30, 2020, 4:33:34 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
> I want an OS protecting from compilers I do not trust without
> performance loss.

That is not possible. Traditional OSes use memory protection hardware
for that, but that hardware introduces performance loss. Singularity
aimed to avoid the loss by relying on trusted compilers. It's not
anything like MSDOS, since MSDOS never tried to coordinate multiple
processors and intercommunicating processes.

Dmitry A. Kazakov

Sep 30, 2020, 5:03:34 PM
On 30/09/2020 22:33, Paul Rubin wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>> I want an OS protecting from compilers I do not trust without
>> performance loss.
>
> That is not possible. Traditional OS's use memory protection hardware
> for that, but that hardware introduces performance loss.

I do not see why the loss must be any bigger than with other means of
synchronization.

> Singularity
> aimed to avoid the loss by relying on trusted compilers.

1. You still have to share memory and use global resources for
interlocking etc.

2. I would argue that hardware-assisted virtualization and insulation of
processes produces leaner and more effective code than a cooperative model.

Lack of virtualization leads to abstraction inversion and all sorts of
low-level programming tricks that make code very ugly, fragile and quite
inefficient.

E.g., in Ada, the absence of co-routines leads to using state machines,
which turns everything upside down and makes it inefficient too.

> It's not
> anything like MSDOS, since MSDOS never tried to coordinate multiple
> processors and intercommunicating processes.

Both deploy a cooperative model of sharing resources.

P.S. Surely MS-DOS coordinated auxiliary processors; there existed lots
of expansion cards with processors on them in MS-DOS times.

P.P.S. In MS-DOS processes were coordinated using glorious INT 21h. (:-))

Randy Brukardt

Sep 30, 2020, 6:42:10 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:rl2rqt$jdq$1...@gioia.aioe.org...
> On 30/09/2020 22:33, Paul Rubin wrote:
>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>>> I want an OS protecting from compilers I do not trust without
>>> performance loss.
>>
>> That is not possible. Traditional OS's use memory protection hardware
>> for that, but that hardware introduces performance loss.
>
> I do not see why loss must be any bigger that other means of
> synchronization.

Synchronization is rather expensive, since it causes issues with prefetch
and caches and pretty much everything else that improves performance.

...
> P.S. Surely MS-DOS coordinated auxiliary processors, there exited lots of
> expansion cards with processors on them in MS-DOS times.

I suppose that was done with device drivers and the like, below anything
visible. I don't remember ever worrying about what was happening on cards,
any more than one does nowadays on Windows or Linux.

> P.P.S. In MS-DOS processes were coordinated using glorious INT 21h. (:-))

Since there wasn't any extra processing (no extra cores back then), anything
working like a process was a hack. Janus/Ada used (and still uses)
cooperative multitasking to give the appearance of multiple processes, but
no such thing was actually happening. Given that MS-DOS itself wasn't
re-entrant, it was too risky to use any sort of conventional
interrupt-driven tasking. (Some people did it anyway, by trusting various
undocumented hacks; there even was a famous book about those - which came
out way too late to influence the Janus/Ada design.)

Randy.


Paul Rubin

Oct 1, 2020, 3:57:54 AM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
> I do not see why loss must be any bigger that other means of
> synchronization.

The address translation hardware unavoidably introduces delays,
especially when there are TLB cache misses.

> 1. You still have to share memory and use global resources for
> interlocking etc.

You have that either way.

> 2. I would argue that hardware-assisted virtualization and insulation
> of processes produces more lean and effective code than cooperative
> model.

It's still preemptive, driven by hardware interrupts, like any other OS.

> Lack of virtualization leads to abstraction inversion and all sorts of
> low-level programming tricks that make code very ugly, fragile and
> quite inefficient.

The OS and compiler take care of that.

Dmitry A. Kazakov

Oct 1, 2020, 5:26:16 AM
On 01/10/2020 09:57, Paul Rubin wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>> I do not see why loss must be any bigger that other means of
>> synchronization.
>
> The address translation hardware unavoidably introduces delays,
> especially when there are TLB cache misses.

1. So does deploying system-wide spin locks etc.

2. Then I don't see how you are going to go without address translation.
Are you going to recompile and re-link everything at absolute addresses
every time anything changes, and then reboot?

>> 1. You still have to share memory and use global resources for
>> interlocking etc.
>
> You have that either way.

That is the point.

>> 2. I would argue that hardware-assisted virtualization and insulation
>> of processes produces more lean and effective code than cooperative
>> model.
>
> It's still preemptive driven by hardware interrupts like any other OS.

So where is the performance gain? You still need to store/restore
registers and other context data upon preempting.

>> Lack of virtualization leads to abstraction inversion and all sorts of
>> low-level programming tricks that make code very ugly, fragile and
>> quite inefficient.
>
> The OS and compiler take care of that.

Neither can do anything about it. You either have an abstraction, like a flat
contiguous address space, however implemented, or you do not. The
penalty is always there. You can have some relief from the hardware, or none.

Dmitry A. Kazakov

Oct 1, 2020, 5:28:13 AM
On 01/10/2020 00:42, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message

>> P.S. Surely MS-DOS coordinated auxiliary processors, there exited lots of
>> expansion cards with processors on them in MS-DOS times.
>
> I suppose, that was done with device drivers and the like, below anything
> visible. I don't remember every worrying about what was happening on cards,
> anymore than one does nowdays on Windows or Linux.

It was so much simpler then. Usually dual-ported RAM was used for
communication. Today's hardware interfaces are faster but incredibly
complicated.

>> P.P.S. In MS-DOS processes were coordinated using glorious INT 21h. (:-))
>
> Since there wasn't any extra processing (no cores back then), anything
> working like a process was a hack. Janus/Ada used (and still uses)
> cooperative multitasking to give the appearance of multiple processes, but
> no such thing was actually happening. Given that MS-DOS itself wasn't
> re-enterant, it was too risky to use any sort of conventional
> interrupt-driven tasking. (Some people did it anyway, by trusting various
> undocumented hacks; there even was a famous book about those - which came
> out way too late to influnce the Janus/Ada design.)

Yes, Ada tasking intermixed with I/O was a nightmare. It is so much
better now that the only place left where you need to care about
that stuff is during a protected action.

BTW, I still do not know how to design an Ada-conformant tracing/logging
facility such that you could trace/log from anywhere, protected action
included, and without knowing statically which protected object is involved.

Paul Rubin

Oct 1, 2020, 5:46:53 AM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>> You have that either way.
> That is the point.

With a conventional OS you have those synchronization delays AND
you have address translation delays. With Singularity you still
have the first, but you get rid of the second.

> So, where is performance gain? You still need storing/restoring
> registers and other context's data upon preemting.

You run millions of instructions between preemptions, but you take the
address translation delay on EVERY memory access.

> Are you going to recompile and re-link everything in
> absolute addresses every time anything changes and then reboot?

Position-independent code is a thing.

> Neither can do anything about it. You either have abstraction, like
> flat contiguous address space, however implemented, or you do not. The
> penalty is always there. You can have some relief from the hardware or
> none.

Shrug; maybe there is some kind of block allocator like in the old days.
The original purpose of virtual memory was to allow simulating big RAM
by paging to disk. Nobody cares about that anymore.

That said, I don't know much about Singularity (never used it, haven't
read the papers) so maybe I'm missing something important. But the guys
who told me about it were knowledgeable and they were impressed by it.

Anyway, this tangent started from the claim that nothing different had
been done in OSes in a while. I don't claim Singularity is great, only
that it's different.

J-P. Rosen

Oct 1, 2020, 5:59:58 AM
On 01/10/2020 at 11:28, Dmitry A. Kazakov wrote:
> BTW, I still do not know to design an Ada-conform tracing/logging
> facility such that you could trace/log from anywhere, protected action
> included, and without knowing statically which protected object is
> involved.
>
Did you have a look at package Debug?
(https://www.adalog.fr/en/components#Debug)

It features, among other things, a trace routine which is guaranteed not
to be potentially blocking.

--
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00
http://www.adalog.fr

Dmitry A. Kazakov

Oct 1, 2020, 6:21:49 AM
On 01/10/2020 11:59, J-P. Rosen wrote:
> On 01/10/2020 at 11:28, Dmitry A. Kazakov wrote:
>> BTW, I still do not know to design an Ada-conform tracing/logging
>> facility such that you could trace/log from anywhere, protected action
>> included, and without knowing statically which protected object is
>> involved.
>>
> Did you have a look at package Debug?
> (https://www.adalog.fr/en/components#Debug)

Thanks

> It features, among others, a trace routine which is guaranteed to not be
> potentially blocking.

It calls a protected operation on a different protected object; yes,
this is non-blocking, and I considered the same, but is this legal? Maybe I
am wrong, but I have the impression that walking off to another object
is not OK. Or is that limited to protected entries only?

Another issue is having two different calls: Trace and protected Trace.
If one is used instead of the other, you have a ticking bomb in the
production code. I remember that there was a GNAT pragma to catch it,
but it was a run-time check, so it just replaced one type of explosive
with another.

Dmitry A. Kazakov

Oct 1, 2020, 6:35:12 AM
On 01/10/2020 11:46, Paul Rubin wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>>> You have that either way.
>> That is the point.
>
> With a conventional OS you have those synchronization delays AND
> you have address translation delays. With Singularity you still
> have the first, but you get rid of the second.

And the cache is gone.

>> So, where is performance gain? You still need storing/restoring
>> registers and other context's data upon preemting.
>
> You run millions of instructions between preemptions, but you take the
> address translation delay on EVERY memory access.

Like when the program counter register is used. (:-))

>> Are you going to recompile and re-link everything in
>> absolute addresses every time anything changes and then reboot?
>
> Position independent code is a thing.

Either you do it statically, maybe postponed until the loader runs,
and then the code is not really position-independent, or you do it
dynamically, and then it is hardware, or no hardware.

Niklas Holsti

Oct 1, 2020, 7:38:32 AM
On 2020-10-01 13:21, Dmitry A. Kazakov wrote:
> On 01/10/2020 11:59, J-P. Rosen wrote:
>> On 01/10/2020 at 11:28, Dmitry A. Kazakov wrote:
>>> BTW, I still do not know to design an Ada-conform tracing/logging
>>> facility such that you could trace/log from anywhere, protected action
>>> included, and without knowing statically which protected object is
>>> involved.
>>>
>> Did you have a look at package Debug?
>> (https://www.adalog.fr/en/components#Debug)
>
> Thanks
>
>> It features, among others, a trace routine which is guaranteed to not be
>> potentially blocking.
>
> It calls a protected operation on a different protected object, yes,
> this is non-blocking, and I considered the same, but is this legal?


Yes.

If the program is using ceiling-priority-based protection, the priority
of the calling object must be less or equal to the priority of the
called object.


> Or is that limited to protected entries only?


An entry call is potentially blocking and therefore not allowed in a
protected operation.


> Another issue is having two different calls: Trace and protected Trace.
> If one used instead of another, you have a ticking bomb in the
> production code.


I assume that is a "feature" of the referenced Debug package, not of the
basic method it uses to implement a logging facility.

I haven't looked at the Debug package, but I would have suggested a
logging facility that consists of:

1. A LIFO queue of log entries implemented in a protected object of
highest priority. The object has a procedure "Write_Log_Entry".

2. A task that empties the LIFO queue into a log file. The task calls an
entry of the LIFO protected object to get a log entry from the queue,
but executes the file-writing operations in task context, not in a
protected operation.
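
A minimal sketch of that design (hypothetical names; fixed-length entries
and a fixed-size circular buffer keep allocators out of the protected
object, and the Priority'Last ceiling lets Put be called from any
protected action under ceiling locking):

   package Logging is
      subtype Log_Entry is String (1 .. 80);
      procedure Write_Log_Entry (Item : in Log_Entry);
   end Logging;

   with Ada.Text_IO;
   with System;

   package body Logging is

      protected Queue
        with Priority => System.Priority'Last
      is
         procedure Put (Item : in Log_Entry);  --  safe in protected actions
         entry Get (Item : out Log_Entry);     --  for task contexts only
      private
         Buffer : array (1 .. 64) of Log_Entry;
         First  : Positive := 1;
         Length : Natural  := 0;
      end Queue;

      protected body Queue is

         procedure Put (Item : in Log_Entry) is
         begin
            if Length < Buffer'Length then
               Buffer ((First + Length - 1) mod Buffer'Length + 1) := Item;
               Length := Length + 1;
            end if;  --  when full, drop the entry rather than wait
         end Put;

         entry Get (Item : out Log_Entry) when Length > 0 is
         begin
            Item   := Buffer (First);
            First  := First mod Buffer'Length + 1;
            Length := Length - 1;
         end Get;

      end Queue;

      procedure Write_Log_Entry (Item : in Log_Entry) is
      begin
         Queue.Put (Item);  --  protected procedure call: never "blocking"
      end Write_Log_Entry;

      task Writer;  --  empties the queue for as long as the program runs

      task body Writer is
         Item : Log_Entry;
      begin
         loop
            Queue.Get (Item);             --  entry call, in task context
            Ada.Text_IO.Put_Line (Item);  --  file I/O outside the object
         end loop;
      end Writer;

   end Logging;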

J-P. Rosen

Oct 1, 2020, 7:48:25 AM
On 01/10/2020 at 12:21, Dmitry A. Kazakov wrote:
>> It features, among others, a trace routine which is guaranteed to not be
>> potentially blocking.
>
> It calls a protected operation on a different protected object, yes,
> this is non-blocking, and I considered the same, but is this legal? Maybe I
> am wrong, but I have the impression that walking away to another object
> is not OK. Or is that limited to protected entries only?
A protected operation is not allowed to call a "potentially blocking
operation", whose list is given in 9.5(30..33). And a protected
subprogram is not on that list (except for the case of an external call
on the same object, not the case here).
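
To illustrate (a hypothetical sketch, not taken from Debug): a protected
procedure may perform an external call on a protected *procedure* of
another object, while an entry call in the same place would be on the
potentially-blocking list:

   procedure Sketch is

      protected Tracer is
         procedure Trace (Msg : in String);  --  not potentially blocking
         entry Flush;                        --  an entry call IS
      private
         Pending : Natural := 0;
      end Tracer;

      protected body Tracer is
         procedure Trace (Msg : in String) is
         begin
            Pending := Pending + 1;  --  a real version would store Msg
         end Trace;

         entry Flush when Pending = 0 is
         begin
            null;
         end Flush;
      end Tracer;

      protected Worker is
         procedure Step;
      end Worker;

      protected body Worker is
         procedure Step is
         begin
            Tracer.Trace ("step");  --  OK: protected procedure call
            --  Tracer.Flush;       --  an entry call here would be a
            --                          potentially blocking operation
         end Step;
      end Worker;

   begin
      Worker.Step;
   end Sketch;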

>
> Another issue is having two different calls: Trace and protected Trace.
> If one is used instead of the other, you have a ticking bomb in the
> production code. I remember that there was a GNAT pragma to catch it,
> but it was a run-time check, so it just replaced one type of explosive
> with another.
Well, just use AdaControl with the rule:
check Potentially_Blocking_Operations;
;-)

Niklas Holsti

Oct 1, 2020, 7:52:39 AM
On 2020-10-01 14:38, Niklas Holsti wrote:

Whoops, in the below I should have said "FIFO" instead of "LIFO". Brain
fart.

Dmitry A. Kazakov

Oct 1, 2020, 8:51:57 AM
On 01/10/2020 13:38, Niklas Holsti wrote:
> On 2020-10-01 13:21, Dmitry A. Kazakov wrote:
>> On 01/10/2020 11:59, J-P. Rosen wrote:
>>> On 01/10/2020 at 11:28, Dmitry A. Kazakov wrote:
>>>> BTW, I still do not know how to design an Ada-conforming tracing/logging
>>>> facility such that you could trace/log from anywhere, protected action
>>>> included, and without knowing statically which protected object is
>>>> involved.
>>>>
>>> Did you have a look at package Debug?
>>> (https://www.adalog.fr/en/components#Debug)
>>
>> Thanks
>>
>>> It features, among others, a trace routine which is guaranteed to not be
>>> potentially blocking.
>>
>> It calls a protected operation on a different protected object, yes,
>> this is non-blocking, and I considered the same, but is this legal?
>
> Yes.
>
> If the program is using ceiling-priority-based protection, the priority
> of the calling object must be less or equal to the priority of the
> called object.

My mental picture was protected procedure calls executed concurrently on
different cores of a multi-core processor. Would that sort of
implementation be legal?

If so, then let there be protected procedure P1 of the object O1 and P2
of O2. If P1 and P2 call P3 of O3, that would be a problem. Ergo, either
wandering or concurrent protected calls must be illegal.

> 1. A LIFO queue of log entries implemented in a protected object of
> highest priority. The object has a procedure "Write_Log_Entry".

Yes, that was what I thought and what Debug.adb does. However Debug.adb
allocates the body of the FIFO element in the pool. I would rather use
my implementation of indefinite FIFO which does not use pools. I don't
want allocators/deallocators inside protected stuff.

> 2. A task that empties the LIFO queue into a log file. The task calls an
> entry of the LIFO protected object to get a log entry from the queue,
> but executes the file-writing operations in task context, not in a
> protected operation.

A simpler approach is to flush the queue by the first call to an
unprotected variant of Trace. I believe Debug.adb does just this.

Dmitry A. Kazakov

Oct 1, 2020, 8:54:43 AM
On 01/10/2020 13:48, J-P. Rosen wrote:

> Well, just use AdaControl with the rule:
> check Potentially_Blocking_Operations;
> ;-)

Oh, that is great. I will certainly do. Thanks for the hint!

J-P. Rosen

Oct 1, 2020, 10:18:59 AM
On 01/10/2020 at 14:51, Dmitry A. Kazakov wrote:

> My mental picture was protected procedure calls executed concurrently on
> different cores of a multi-core processor. Would that sort of
> implementation be legal?
No. Protected objects guarantee that only one task at a time can be
inside (ignoring functions). Multi-cores don't come into play.

> If so, then let there be protected procedure P1 of the object O1 and P2
> of O2. If P1 and P2 call P3 of O3, that would be a problem. Ergo, either
> wandering or concurrent protected calls must be illegal.
But it's not the case...

>> 1. A LIFO queue of log entries implemented in a protected object of
>> highest priority. The object has a procedure "Write_Log_Entry".
>
> Yes, that was what I thought and what Debug.adb does. However Debug.adb
> allocates the body of the FIFO element in the pool. I would rather use
> my implementation of indefinite FIFO which does not use pools. I don't
> want allocators/deallocators inside protected stuff.
As surprising as it may seem, allocators/deallocators are NOT
potentially blocking operations. But I understand your concerns...

>> 2. A task that empties the LIFO queue into a log file. The task calls
>> an entry of the LIFO protected object to get a log entry from the
>> queue, but executes the file-writing operations in task context, not
>> in a protected operation.
>
> A simpler approach is to flush the queue by the first call to an
> unprotected variant of Trace. I believe Debug.adb does just this.

Yes. Moreover, there is a Finalize of a controlled object to make sure
that no trace is lost if the program terminates without calling any
(unprotected) Trace.

Niklas Holsti

Oct 1, 2020, 11:38:15 AM
On 2020-10-01 15:51, Dmitry A. Kazakov wrote:
> On 01/10/2020 13:38, Niklas Holsti wrote:
>> On 2020-10-01 13:21, Dmitry A. Kazakov wrote:
>>> On 01/10/2020 11:59, J-P. Rosen wrote:
>>>> On 01/10/2020 at 11:28, Dmitry A. Kazakov wrote:
>>>>> BTW, I still do not know how to design an Ada-conforming tracing/logging
>>>>> facility such that you could trace/log from anywhere, protected action
>>>>> included, and without knowing statically which protected object is
>>>>> involved.
>>>>>
>>>> Did you have a look at package Debug?
>>>> (https://www.adalog.fr/en/components#Debug)
>>>
>>> Thanks
>>>
>>>> It features, among others, a trace routine which is guaranteed to
>>>> not be
>>>> potentially blocking.
>>>
>>> It calls a protected operation on a different protected object, yes,
>>> this is non-blocking, and I considered the same, but is this legal?
>>
>> Yes.
>>
>> If the program is using ceiling-priority-based protection, the
>> priority of the calling object must be less or equal to the priority
>> of the called object.
>
> My mental picture was protected procedure calls executed concurrently on
> different cores of a multi-core processor. Would that sort of
> implementation be legal?


If the protected procedures belong to different protected objects, yes
it is legal. But not if they belong to the same object, as J-P noted.

Note that the ordinary form of the ceiling-priority-locking method does
not work for multi-cores, because a task executing at the ceiling
priority of a protected object does not prevent the parallel execution
of other tasks (on other cores) at the same or lower priority.


>> 2. A task that empties the [FIFO] queue into a log file. The task calls
>> an entry of the [FIFO] protected object to get a log entry from the
>> queue, but executes the file-writing operations in task context, not
>> in a protected operation.
>
> A simpler approach is to flush the queue by the first call to an
> unprotected variant of Trace. I believe Debug.adb does just this.


That is ok for programs that run for a short while, then terminate. Most
of my programs are non-terminating embedded programs so the log has to
be emitted from RAM to some larger storage continuously as the program
is running.

Dmitry A. Kazakov

Oct 1, 2020, 11:44:54 AM
On 01/10/2020 16:18, J-P. Rosen wrote:
> On 01/10/2020 at 14:51, Dmitry A. Kazakov wrote:
>
>> My mental picture was protected procedure calls executed concurrently on
>> different cores of a multi-core processor. Would that sort of
>> implementation be legal?
> No. Protected objects guarantee that only one task at a time can be
> inside (ignoring functions). Multi-cores don't come into play.

Inside one protected object, or inside any protected object? Or is it
effectively one single protected object, of which all protected objects
are facets?

>> If so, then let there be protected procedure P1 of the object O1 and P2
>> of O2. If P1 and P2 call P3 of O3, that would be a problem. Ergo, either
>> wandering or concurrent protected calls must be illegal.
> But it's not the case...

The scenario is: Task1 calls P1 on O1. Task2 calls P2 on O2. Both P1 and
P2 call Protected_Trace on the protected Tracer object. If tasks occupy
two different cores, would/should one suspend (in order not to use the
reserved word "block") another?

>>> 1. A LIFO queue of log entries implemented in a protected object of
>>> highest priority. The object has a procedure "Write_Log_Entry".
>>
>> Yes, that was what I thought and what Debug.adb does. However Debug.adb
>> allocates the body of the FIFO element in the pool. I would rather use
>> my implementation of indefinite FIFO which does not use pools. I don't
>> want allocators/deallocators inside protected stuff.
> As surprising as it may seem, allocators/deallocators are NOT
> potentially blocking operations. But I understand your concerns...

And this raises the same question. The pool must be interlocked, but must
not block. What is the semantics of this "non-blocking" interlocking on a
multi-core machine?

Dmitry A. Kazakov

Oct 1, 2020, 12:06:50 PM
On 01/10/2020 17:38, Niklas Holsti wrote:

> If the protected procedures belong to different protected objects, yes
> it is legal. But not if they belong to the same object, as J-P noted.

But then you have a problem when two independently running protected
procedures of *different* objects call a procedure of a third object.
You must serialize these calls, and that is effectively blocking.

>> A simpler approach is to flush the queue by the first call to an
>> unprotected variant of Trace. I believe Debug.adb does just this.
>
> That is ok for programs that run for a short while, then terminate. Most
> of my programs are non-terminating embedded programs so the log has to
> be emitted from RAM to some larger storage continuously as the program
> is running.

In my case the embedded program keeps on tracing all the time. Tracing
from a protected action is rather an exception, sometimes literally,
e.g. from a GNAT exception handler.

Niklas Holsti

Oct 1, 2020, 1:01:48 PM
On 2020-10-01 19:06, Dmitry A. Kazakov wrote:
> On 01/10/2020 17:38, Niklas Holsti wrote:
>
>> If the protected procedures belong to different protected objects, yes
>> it is legal. But not if they belong to the same object, as J-P noted.
>
> But then you have a problem when two independently running protected
> procedures of *different* objects call a procedure of a third object.
> You must serialize these calls, and that is effectively blocking.


I don't know what you mean by "effectively", here, but yes, one of the
tasks must wait for the other task to complete the protected operation
on the third object. So what? This is the idea of a protected object: if
two tasks want to operate on the same object at the same time, the tasks
must be serialized, and one task must wait for the other. This applies
even if the task that has to wait is already in the middle of another
protected operation on another object.

This is analogous to the fact that any task may have to wait for
higher-priority (possibly pre-empting) tasks to finish their current
work, and in fact the ceiling-priority-locking method uses this fact to
implement the protection of protected objects. Even a task that is in
the middle of a protected operation can be pre-empted by a
higher-priority task and so may have to wait, as is normal in priority
scheduling.

While the usual advice is that protected operations should be "short and
quick", this is as misleading as claiming that a real-time program must
react "quickly" to inputs. In a given real-time program, the actual
upper limit on the duration of a protected operation is set only by the
effect of that operation's execution on the overall schedulability of
that program. At low priorities, where deadlines usually are relatively
long, it is quite ok to have protected operations that take quite a
while to execute, as long as the overall response times of the affected
low-priority tasks remain shorter than their deadlines. Of course the
same principle applies at all priority levels.

Dmitry A. Kazakov

Oct 1, 2020, 1:37:03 PM
On 01/10/2020 19:01, Niklas Holsti wrote:
> On 2020-10-01 19:06, Dmitry A. Kazakov wrote:
>> On 01/10/2020 17:38, Niklas Holsti wrote:
>>
>>> If the protected procedures belong to different protected objects,
>>> yes it is legal. But not if they belong to the same object, as J-P
>>> noted.
>>
>> But then you have a problem when two independently running protected
>> procedures of *different* objects call a procedure of a third object.
>> You must serialize these calls, and that is effectively blocking.
>
>
> I don't know what you mean by "effectively", here, but yes, one of the
> tasks must wait for the other task to complete the protected operation
> on the third object. So what?

It looks quite complicated to implement, e.g. checking barriers and
doing locking when you have already done that.

Now let's continue the example. What happens when the calling paths are:

O1.P1 --> O3.P3 --> O2.Q

O2.P2 --> O3.P3 --> O2.Q

Let O1.P1 block O2.P2 on an attempt to enter O3.P3:

O1.P1 --> O3.P3

O2.P2 --> blocked

Then O3.P3 calls O2.Q:

O1.P1 --> O3.P3 --> O2.Q
                      |
O2.P2 --> blocked     V

This will either re-enter O2 or deadlock.

DrPi

Oct 1, 2020, 3:02:23 PM
On 01/10/2020 at 11:46, Paul Rubin wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>>> You have that either way.
>> That is the point.
>
> With a conventional OS you have those synchronization delays AND
> you have address translation delays. With Singularity you still
> have the first, but you get rid of the second.
Address translation is also a security feature. One process can't access
data of another process. What about this with Singularity?

>
>> So, where is performance gain? You still need storing/restoring
>> registers and other context's data upon preemting.
>
> You run millions of instructions between preemptions, but you take the
> address translation delay on EVERY memory access.
TLB tables are reloaded when switching from one process to another. If
the switched-in process has been swapped out to disk, then you get a big
performance hit.
I don't get the point about the translation delay on every memory access,
since the address translation is done by efficient specialized hardware.

>
>> Are you going to recompile and re-link everything in
>> absolute addresses every time anything changes and then reboot?
>
> Position independent code is a thing.
>
>> Neither can do anything about it. You either have abstraction, like
>> flat contiguous address space, however implemented, or you do not. The
>> penalty is always there. You can have some relief from the hardware or
>> none.
>
> Shrug, maybe there is some kind of block allocator like in the old days.
> The original purpose of virtual memory was to allow simulating big ram
> by paging to disk. Nobody cares about that any more.
Really?
Have you had a look at the Resource Monitor, memory tab, in Windows 10?
You might be surprised. Even when everything fits in physical RAM, parts
of (unused) allocated memory are stored on disk. This way, launching a
new process is faster.
At work, I use programs that require big amounts of RAM. Without disk
swapping, I could not run them (16 GB of physical RAM on the machine).
We still do care about virtual memory.

Brian Drummond

Oct 1, 2020, 5:36:40 PM
On Wed, 30 Sep 2020 21:42:21 +0200, Dmitry A. Kazakov wrote:

> On 30/09/2020 19:27, Paul Rubin wrote:
>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>>> OK, but that again is rather retrograde, MS-DOS pops into my mind
>>> (:-)).
>>
>> MSDOS had no memory protection at all, and was basically single
>> tasking.
>> Singularity had the limitation that you were only allowed to use
>> trusted compilers, but in exchange it gave an interesting approach to
>> programming high performance multiprocessor systems.
>
> Put a trusted compiler into MS-DOS, where is a difference? Tasking would
> be up to the compiler's run-time, obviously.
>
> I want an OS protecting from compilers I do not trust without
> performance loss. Static checks must be enforced at run-time.

Maybe I should knock together a new Linn Rekursiv on an FPGA.
https://en.wikipedia.org/wiki/Rekursiv

Objects were essentially memory segments, together with their own object
number, type, size: static checks happened in parallel with operations.
Even inheritance was handled below the instruction set level (in
microcode).

-- Brian

Randy Brukardt

Oct 1, 2020, 5:54:13 PM
"Paul Rubin" <no.e...@nospam.invalid> wrote in message
news:87o8lml...@nightsong.com...
...
>> Are you going to recompile and re-link everything in
>> absolute addresses every time anything changes and then reboot?
>
> Position independent code is a thing.

Sure, but it's impractical on the Intel architectures as the machines are
too register-poor to use them unnecessarily.

Besides, it doesn't seem necessary; the loader can translate addresses when
an executable is installed. (That's commonly done in .EXE files.) Most data
is indirectly accessed anyway (stack or heap based), so the actual address
it lives at isn't relevant (and the rest can be translated when loading).

Dmitry said:
> You either have abstraction, like flat contiguous address space, however
> implemented, or you do not.

Flat address spaces are greatly overrated. It's much better to organize
programs as a series of separate segments anyway. No one should be
reading code or executing data, so why should they share an address
space? The problem with 1980s segments was that they were too small,
requiring lots of juggling, but that's not a problem on more modern
architectures.

To me, a "flat address space" is precisely destroying abstraction. Code /=
Data, and they should not be treated the same. (Many of the problems with
malware occur precisely because it is easy to execute data.)

Randy.


Randy Brukardt

Oct 1, 2020, 6:10:13 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:rl4thi$hdq$1...@gioia.aioe.org...
...
>>> If so, then let there be protected procedure P1 of the object O1 and P2
>>> of O2. If P1 and P2 call P3 of O3, that would be a problem. Ergo, either
>>> wandering or concurrent protected calls must be illegal.
>> But it's not the case...
>
> The scenario is: Task1 calls P1 on O1. Task2 calls P2 on O2. Both P1 and
> P2 call Protected_Trace on the protected Tracer object. If tasks occupy
> two different cores, would/should one suspend (in order not to use the
> reserved word "block") another?

A task has to wait to get access to a PO. This is *not* blocking: the
task is not allowed to do anything else during such a period. (This is
why protected operations are supposed to be fast!) It's canonically
implemented with a spin-lock, but in some cases one can use lock-free
algorithms instead.

For a single core, one can use ceiling locking instead (and have no
waiting), but that model seems almost irrelevant on modern machines.
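
For reference, that choice is made with the standard RM D.3 pragma, the
ceilings then coming from each object's Priority aspect:

   pragma Locking_Policy (Ceiling_Locking);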

Randy.


Randy Brukardt

Oct 1, 2020, 6:13:17 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:rl4uql$1569$1...@gioia.aioe.org...
> On 01/10/2020 17:38, Niklas Holsti wrote:
>
>> If the protected procedures belong to different protected objects, yes it
>> is legal. But not if they belong to the same object, as J-P noted.
>
> But then you have a problem when two independently running protected
> procedures of *different* objects call a procedure of a third object. You
> must serialize these calls, and that is effectively blocking.

Not really: blocking implies task scheduling (and possible preemption and
priority inversion), whereas no scheduling happens on a protected call.
There's just a possible wait. It's a subtle difference, admittedly, but it
makes a world of difference to analysis.

Randy.


Randy Brukardt

Oct 1, 2020, 6:21:05 PM
[Sorry about breaking the thread, my news server can't handle the depth of
replies on this thread.]

"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:rl543q$1om5$1...@gioia.aioe.org...
> On 01/10/2020 19:01, Niklas Holsti wrote:
>> On 2020-10-01 19:06, Dmitry A. Kazakov wrote:
>>> On 01/10/2020 17:38, Niklas Holsti wrote:
>>>
>>>> If the protected procedures belong to different protected objects, yes
>>>> it is legal. But not if they belong to the same object, as J-P noted.
>>>
>>> But then you have a problem when two independently running protected
>>> procedures of *different* objects call a procedure of a third object.
>>> You must serialize these calls, and that is effectively blocking.
>>
>>
>> I don't know what you mean by "effectively", here, but yes, one of the
>> tasks must wait for the other task to complete the protected operation on
>> the third object. So what?
>
> It looks quite complicated to implement, e.g. checking barriers and doing
> locking when you have already done that.
>
> Now let's continue the example. What happens when the calling paths are:
>
> O1.P1 --> O3.P3 --> O2.Q
>
> O2.P2 --> O3.P3 --> O2.Q

This latter path is always going to deadlock, since the second call to O2 is
necessarily an external call (you're inside of O3, not O2). An external call
has to get the lock for the protected object, and since the lock is already
in use, that will never proceed.

[If O3 was nested in O2, then the second call to O2 could be internal. But
in that case, the first path would be impossible as O1 could not see O3 to
call it.]

Remember that the decision as to whether a call is internal or external is
purely syntactic: if a protected object is given explicitly in the call, one
needs to trigger the mutual exclusion mechanisms again. The only time one
doesn't need to do that is when the call does not include the object (that
is, directly from the body of an operation).
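
Spelled out as a sketch (hypothetical declarations; the second path above
reduced to a single task, which is enough to show the problem):

   procedure Second_Path is

      protected O2 is
         procedure P2;
         procedure Q;
      end O2;

      protected O3 is
         procedure P3;
      end O3;

      protected body O3 is
         procedure P3 is
         begin
            O2.Q;  --  external call: must acquire O2's lock
         end P3;
      end O3;

      protected body O2 is
         procedure P2 is
         begin
            O3.P3;  --  the caller still holds O2's lock here
         end P2;

         procedure Q is
         begin
            null;
         end Q;
      end O2;

   begin
      O2.P2;  --  O2 locked, then O3, then an external call back on O2:
              --  Program_Error under Detect_Blocking, otherwise likely
              --  deadlock
   end Second_Path;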

Randy.



Paul Rubin

Oct 1, 2020, 7:12:16 PM
DrPi <3...@drpi.fr> writes:
> Address translation is also a security feature. One process can't
> access data of another process. What about this with Singularity ?

The compiler statically verifies that you can't access data of other
processes. All the code is written in a dialect of C# that is analogous
to SPARK/Ada, which does the verification. Also, it's not a consumer
operating system that runs all kinds of hostile applications, browsers,
etc.

> TLB tables are reloaded when switching from one process to another.

If those tables are not there in the first place, there is nothing to
reload.

> I don't get the point on translation delay on every memory access
> since the address translation is done by efficient specialized
> hardware.

It still adds a delay, especially in the case of a cache miss.

> Have you had a look at Ressource manager, memory tab in Windows 10 ?

This isn't about Windows. If your application needs data on disk, it's
better to manage that yourself rather than leave it up to the OS, which
knows nothing about your application's behaviour. That's at any rate
similar to the argument that Ada users make against garbage collection.
OTOH, it looks like Singularity is garbage collected, so there's that.

Paul Rubin

Oct 1, 2020, 7:14:18 PM
"Randy Brukardt" <ra...@rrsoftware.com> writes:
> Sure, but it's impractical on the Intel architectures as the machines are
> too register-poor to use them unnecessarily.

That was before the 64-bit extensions. They have 16 GPR's now, plus a
lot of special purpose (XMM etc.) registers where you can put stuff.

J-P. Rosen

Oct 2, 2020, 1:36:09 AM
On 01/10/2020 at 17:44, Dmitry A. Kazakov wrote:
>> No. Protected objects guarantee that only one task at a time can
>> be inside (ignoring functions). Multi-cores don't come into play.
>
> Inside one protected object, or inside any protected object? Is it
> effectively one single protected object and all protected object are
> its facets.
Inside one protected object.

> The scenario is: Task1 calls P1 on O1. Task2 calls P2 on O2. Both P1
> and P2 call Protected_Trace on the protected Tracer object. If tasks
> occupy two different cores, would/should one suspend (in order not to
> use the reserved word "block") another?
To continue on Randy's response: mutual exclusion is not blocking.
"Blocking" (as in "potentially blocking operation") means "being put on
a queue", i.e. when the waiting time is potentially unbounded. The
waiting time due to mutual exclusion is bounded by the execution time of
the protected operation, and then can be included in the excution time
of the waiting task. (In reality, it can be slightly more complicated,
but the idea is that it is bounded).

>> As surprising as it may seem, allocators/deallocators are NOT
>> potentially blocking operations. But I understand your concerns...
>
> And this raises the same question. Pool must be interlocked, but not
> block. What is the semantics of this "non-blocking" interlocking on
> a multi-core machine?
>
Once again, allocators need mutual exclusion, but not queuing, therefore
it's OK.

> But then you have a problem when two independently running protected
> procedures of *different* objects call a procedure of a third object.
> You must serialize these calls, and that is effectively blocking.
>
Waiting, not queuing.

--------
In summary, the model of PO is two levels:
1) mutual exclusion, which is not "blocking"
2) for entries: queuing, which is "blocking"

Once you realize this, it should make this whole thread clearer....
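
A hypothetical counting semaphore shows the two levels side by side:

   procedure Demo is

      protected Semaphore is
         procedure Release;  --  level 1 only: a caller just waits for
                             --  the lock, for a bounded time
         entry Acquire;      --  level 2: a caller may queue behind the
                             --  barrier, potentially unbounded
      private
         Count : Natural := 0;
      end Semaphore;

      protected body Semaphore is
         procedure Release is
         begin
            Count := Count + 1;
         end Release;

         entry Acquire when Count > 0 is
         begin
            Count := Count - 1;
         end Acquire;
      end Semaphore;

   begin
      Semaphore.Release;  --  bounded wait: mutual exclusion only
      Semaphore.Acquire;  --  could "block" indefinitely while Count = 0
   end Demo;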

Dmitry A. Kazakov

Oct 2, 2020, 2:56:00 AM
Is that an implementation detail or a requirement? The lock can be
task-re-entrant.

> [If O3 was nested in O2, then the second call to O2 could be internal. But
> in that case, the first path would be impossible as O1 could not see O3 to
> call it.]
>
> Remember that the decision as to whether a call is internal or external is
> purely syntactic: if a protected object is given explicitly in the call, one
> needs to trigger the mutual exclusion mechanisms again. The only time one
> doesn't need to do that is when the call does not include the object (that
> is, directly from the body of an operation).

Even when the object in the call is statically known to be the same?

Dmitry A. Kazakov

Oct 2, 2020, 2:56:41 AM
On 02/10/2020 07:36, J-P. Rosen wrote:

> To continue on Randy's response: mutual exclusion is not blocking.
> "Blocking" (as in "potentially blocking operation") means "being put on
> a queue", i.e. when the waiting time is potentially unbounded.

It would be a poor definition, because a deadlock is not bounded either.
If jumping from one protected object to another is legal, we can
construct a deadlock out of mutual exclusion. We also have a situation
when multiple tasks executing protected procedures are awaiting their
turn to enter a procedure of some object. They will continue (if not
deadlocked) in some order, which is obviously a queue.

Dmitry A. Kazakov

Oct 2, 2020, 2:56:47 AM
Wow, I never heard about it. It is pretty close to the general idea. And
surely Smalltalk is not the right OO model for this stuff.

[The UK was leading innovation at that time. Inmos' transputer is an
example.]

J-P. Rosen

Oct 2, 2020, 3:42:05 AM

On 02/10/2020 at 08:56, Dmitry A. Kazakov wrote:
> On 02/10/2020 07:36, J-P. Rosen wrote:
>
>> To continue on Randy's response: mutual exclusion is not blocking.
>> "Blocking" (as in "potentially blocking operation") means "being put on
>> a queue", i.e. when the waiting time is potentially unbounded.
>
> It would be a poor definition, because a deadlock is not bounded either.
> If jumping from one protected object to another is legal, we can
> construct a deadlock out of mutual exclusion.
But this would necessarily involve an "external call to the same
protected object", which is defined as a potentially blocking operation.
Note that AdaControl is quite powerful at detecting that situation (by
following the call graph).

> We also have a situation
> when multiple tasks executing protected procedures are awaiting their
> turn to enter a procedure of some object. They will continue (if not
> deadlocked) in some order, which is obviously a queue.

No, it can be implemented with a spin lock. It is bounded by the number
of waiting tasks x service time. You don't have to wait for some
unpredictable barrier.

Brian Drummond

Oct 2, 2020, 2:34:24 PM
Yes, the transputer was another one. Even before considering parallelism
its single CPU performance (not much talked about) was quite impressive,
considerably higher than the ARM, and much better code density.

But on OO, Smalltalk was basically what there was, at the time (though
C++ was a contemporary experiment, as was Self, and though I never heard
of it till later, Python).

The Rekursiv's own language, Lingo, had more familiar syntax than
Smalltalk and could have rivalled Python (though the project died before
my 8086 implementation was complete).

There was a lot about the processor that could have been improved in a
second pass : as a first attempt it gave a lot away in performance to
concentrate on the fundamentals.

But it did prove that type safety and dynamic binding are not mutually
exclusive, and that hardware support enforcing at runtime a lot of
correctness [1] that Ada-95 did later at compile time, was possible.

It was "huge and complex" by the standards of the day to add e.g. bounds
and type checks in parallel with useful stuff ... like, 70000 gates when
RISC CPUs were 20000. But that would be vanishingly small today.

[1] Correct if you trusted or separately verified the microcode. A high
integrity Rekursiv would have to prohibit modifying microcode.

A lot of the ideas you outline here sound quite similar to those of David
Harland, its architect. Maybe they will come back into fashion someday...

-- Brian

Paul Rubin

Oct 2, 2020, 5:24:59 PM
Brian Drummond <br...@shapes.demon.co.uk> writes:
> The Rekursiv's own language, Lingo... prove[d] that type safety and
> dynamic binding are not mutually exclusive, and that hardware support
> enforcing at runtime a lot of correctness [1] that Ada-95 did later at
> compile time, was possible.

1) I don't know if the Rekursiv predated the Lisp machine, but maybe
that did about the same?

2) Runtime checks don't really "count" for critical applications. They
just ensure an orderly runtime crash if a type error manifests, but Ada
is designed for writing applications that are simply not allowed to
crash.

3) I wonder where the iAPX 432 comes into this. I think it was designed
to run Ada, but I don't know much more about it. It turned out to be
complicated enough that it had to be spread across 3 chips using the
process technology of the time, and the inter-chip communication made it
too slow to use. But I wonder, these days, if a single chip
implementation would take as large a speed hit vs. conventional
architectures as the 3 chip version did.

Randy Brukardt

Oct 2, 2020, 11:09:25 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:rl6its$10h7$1...@gioia.aioe.org...
...
>>> Now let's continue the example. What happens when the calling paths are:
>>>
>>> O1.P1 --> O3.P3 --> O2.Q
>>>
>>> O2.P2 --> O3.P3 --> O2.Q
>>
>> This latter path is always going to deadlock, since the second call to O2
>> is
>> necessarily an external call (you're inside of O3, not O2). An external
>> call
>> has to get the lock for the protected object, and since the lock is
>> already
>> in use, that will never proceed.
>
> Is that implementation or requirement? The lock can be task-re-entrant.

Language requirement. An external call requires a separate mutual exclusion.
If Detect_Blocking is on, then Program_Error will be raised. Otherwise, any
pestilence might happen.


>> [If O3 was nested in O2, then the second call to O2 could be internal.
>> But
>> in that case, the first path would be impossible as O1 could not see O3
>> to
>> call it.]
>>
>> Remember that the decision as to whether a call is internal or external
>> is
>> purely syntactic: if a protected object is given explicitly in the call,
>> one
>> needs to trigger the mutual exclusion mechanisms again. The only time one
>> doesn't need to do that is when the call does not include the object
>> (that
>> is, directly from the body of an operation).
>
> Even when the object in the call is statically known to be same?

Yes. An external call *always* gets the lock again. I believe that was made
the rule to make it obvious as to what will happen based on the form of
call.

Randy.


Randy Brukardt

Oct 2, 2020, 11:14:53 PM
"J-P. Rosen" <ro...@adalog.fr> wrote in message
news:rl6lka$ccv$1...@dont-email.me...
>
> On 02/10/2020 at 08:56, Dmitry A. Kazakov wrote:
>> On 02/10/2020 07:36, J-P. Rosen wrote:
>>
>>> To continue on Randy's response: mutual exclusion is not blocking.
>>> "Blocking" (as in "potentially blocking operation") means "being put on
>>> a queue", i.e. when the waiting time is potentially unbounded.
>>
>> It would be a poor definition, because deadlock in not bounded as well.
>> If jumping from one protected object to another is legal, we can
>> construct a deadlock out of mutual exclusion.
> But this would necessarily involve an "external call to the same
> protected object", which is defined as a potentially blocking operation.

Note that such an operation doesn't really block, it is a deadlocking
operation; Ada lumped it into "potentially blocking" in order to save some
definitional overhead. (A mistake, in my view, it should simply have been
defined to raise Program_Error or maybe Tasking_Error.) "Potentially
blocking", in normal use, means something else.

Randy.


Dmitry A. Kazakov

Oct 3, 2020, 2:42:06 AM
On 03/10/2020 05:09, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:rl6its$10h7$1...@gioia.aioe.org...

>>> [If O3 was nested in O2, then the second call to O2 could be internal.
>>> But in that case, the first path would be impossible as O1 could not see O3
>>> to call it.]
>>>
>>> Remember that the decision as to whether a call is internal or external
>>> is
>>> purely syntactic: if a protected object is given explicitly in the call,
>>> one
>>> needs to trigger the mutual exclusion mechanisms again. The only time one
>>> doesn't need to do that is when the call does not include the object
>>> (that
>>> is, directly from the body of an operation).
>>
>> Even when the object in the call is statically known to be same?
>
> Yes. An external call *always* gets the lock again. I believe that was made
> the rule to make it obvious as to what will happen based on the form of
> call.

I mean this:

protected body O is
   procedure P1 is
   begin
      ...
   end P1;
   procedure P2 is
   begin
      P1;    -- OK
      O.P1;  -- Deadlock or Program_Error
   end P2;
end O;

Dmitry A. Kazakov

Oct 3, 2020, 2:54:37 AM
On 02/10/2020 20:34, Brian Drummond wrote:
> On Fri, 02 Oct 2020 08:56:44 +0200, Dmitry A. Kazakov wrote:
>
>> On 01/10/2020 23:36, Brian Drummond wrote:
>>> On Wed, 30 Sep 2020 21:42:21 +0200, Dmitry A. Kazakov wrote:
>
>>>> Static checks must be enforced at run-time.
>>>
>>> Maybe I should knock together a new Linn Rekursiv on an FPGA.
>>> https://en.wikipedia.org/wiki/Rekursiv
>>>
>>> Objects were essentially memory segments, together with their own
>>> object number, type, size : static checks happened in parallel with
>>> operations.
>>> Even inheritance was handled below the instruction set level (in
>>> microcode)
>>
>> Wow, I newer heard about it. It is pretty close to the general idea. And
>> surely SmallTalk is not the right OO model for the stuff.
>>
>> [UK was leading innovations that time. Inmos' transputers is an
>> example.]
>
> Yes, the transputer was another one. Even before considering parallelism
> its single CPU performance (not much talked about) was quite impressive,
> considerably higher than the ARM, and much better code density.

At that time I had the idea of mapping Ada tasks onto individual
transputers, using the links for rendezvous and marshaling arguments back
and forth; alas, Inmos died before I even started.

> But on OO, Smalltalk was basically what there was, at the time (though
> C++ was a contemporary experiment, as was Self, and though I never heard
> of it till later, Python.
>
> The Rekursiv's own language, Lingo, had more familiar syntax than
> Smalltalk and could have rivalled Python (though the project died before
> my 8086 implementation was complete).
>
> There was a lot about the processor that could have been improved in a
> second pass : as a first attempt it gave a lot away in performance to
> concentrate on the fundamentals.
>
> But it did prove that type safety and dynamic binding are not mutually
> exclusive, and that hardware support enforcing at runtime a lot of
> correctness [1] that Ada-95 did later at compile time, was possible.

Right, Ada 95 OO was ingenious. In combination with visibility based on
packages rather than classes, it could be ideal for that type of OS.

> It was "huge and complex" by the standards of the day to add e.g. bounds
> and type checks in parallel with useful stuff ... like, 70000 gates when
> RISC CPUs were 20000. But that would be vanishingly small today.

Considering how many cores an Nvidia RTX 3080 has...

> [1] Correct if you trusted or separately verified the microcode. A high
> integrity Rekursiv would have to prohibit modifying microcode.
>
> A lot of the ideas you outline here sound quite similar to those of David
> Harland, its architect. Maybe they will come back into fashion someday...

Yes.

Niklas Holsti

Oct 3, 2020, 3:45:03 AM
That is an internal call, so no deadlock nor error.

See RM 9.5(4.e), which is this exact case.


>       end P2;
>    end O;
>

Dmitry A. Kazakov

Oct 3, 2020, 4:16:21 AM
I.e. it is *not* based on the syntax of the call.

Anyway, the rather disappointing result is that protected procedures may
deadlock (or raise Program_Error) in a legal program.

So my initial disinclination to jump from one protected object to
another is reasonable advice. Or at least the order in which protected
objects are navigated must be the same.