IXM and the future of computing

Vincent Van Laak

Jan 6, 2010, 1:35:51 PM
to Illuminato
Hi folks,

Now, full disclosure, I'm not an IXM developer. However, the first
time I heard about IXM on Slashdot it immediately got my heart going,
as I've had ideas for several years that are closely aligned with the
IXM, in particular its modular, hot-swappable aspects, and I wanted to
share a little bit of that with you in the hopes of inspiring a deeper
conversation about it and possibly starting a new direction for the
IXM or related hardware.

My vision of the future is best summed up as "modular computing",
although as often happens with visions, there are lots of little
details here and there that make it more interesting than just that.
However, for now, let's imagine for a moment that the computer no
longer has a motherboard. Your processor and memory are on one
island, but instead of all the hardwired buses--USB, PCI, SATA, etc.--
there's a high speed connection to the rest of the computer, which is
only revealed to the OS as a networking interface provided by a
standardized chipset.

When an input event arrives elsewhere in the system, it is handled on
the island it came in on. Drivers running locally
scan the event, compare it to a table of who should receive events
(which is updated when new modules are added, or in response to
software events), package it, and send it off.
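That routing table can be sketched in a few lines. This is purely illustrative Python, not IXM code: the island addresses and event names are invented for the example, and a real system would push the packaged messages onto the inter-island link rather than returning them.

```python
# Sketch of a per-island event routing table: event type -> subscriber
# islands. Entries are updated when modules are added or in response to
# software events, as described above. All names here are hypothetical.

def make_router():
    table = {}  # event type -> set of island addresses

    def subscribe(event_type, island):
        table.setdefault(event_type, set()).add(island)

    def unsubscribe(event_type, island):
        table.get(event_type, set()).discard(island)

    def route(event_type, payload):
        # Package the event and produce one message per subscriber.
        return [(island, {"type": event_type, "data": payload})
                for island in sorted(table.get(event_type, ()))]

    return subscribe, unsubscribe, route

subscribe, unsubscribe, route = make_router()
subscribe("mouse.move", "island:gpu")
subscribe("mouse.move", "island:cpu0")
msgs = route("mouse.move", (640, 480))
```

Events with no subscribers simply route to an empty list, so an island can drop them locally without ever bothering the processor.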

Meanwhile, the OS running on the processor in question has finished
its graphical update for this tick, and sends image diffs for any
updated window to a graphics chipset on another island, where a
compositor is run locally, determining the window order, what's
output, and so on. That graphics chipset also receives updated cursor
positions from the mouse input events--unless they've been turned off--
and updates the cursor position, even possibly doing mouseover/
mouseout event tracking and sending those events back to the processor
itself.
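The "image diff" exchange above amounts to the guest OS comparing each new frame against the last one it sent and shipping only the changed portion. A minimal sketch, with an invented frame format (a list of row strings) standing in for real pixel data:

```python
# Hedged sketch of sending frame diffs to a compositor island: only
# rows that changed since the last transmitted frame are sent, and the
# compositor patches its local copy. The frame representation is a toy.

def frame_diff(old, new):
    """Return (row_index, new_row) pairs for rows that changed."""
    return [(i, row) for i, (prev, row) in enumerate(zip(old, new))
            if prev != row]

def apply_diff(frame, diff):
    """Patch the compositor's copy of the frame with a received diff."""
    patched = list(frame)
    for i, row in diff:
        patched[i] = row
    return patched

old = ["....", "....", "...."]
new = ["....", ".##.", "...."]
diff = frame_diff(old, new)   # only the middle row changed
```

An unchanged frame produces an empty diff, so an idle window costs the inter-island link nothing.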

Then imagine that there are two processors, running two separate OSes
completely in parallel, both managed the same way, with only a fairly
slim native OS running on and between the modules, determining when
one OS is dominant and when the other is, based on input events that
neither sees. Even imagine that since both do their window management
via the native OS compositing, the windows for both OSes exist side-by-
side on the same screen, each never knowing that the other is right
next door.

I think it's a good idea. I can see the way it would all work out,
even if I don't have the technical expertise to make it happen on my
own. However, a big part of the equation is a replacement
motherboard--a modular "backbone" that is high speed, regulates power,
navigates errors and manages hotplug events, and appears atomic to the
guest OS. To my mind--and admittedly I may be a bit
premature in saying this--IXM is on the road to being that sort of
technology.

I would absolutely love to talk more about this--you can probably tell
from how long I went on about it--and I would love to be able to work
on it, though I'm not sure what I can offer in terms of raw technical
talent, especially at the moment. I think I'd most love to be able to
get the concept to a point where it could inspire people to invest in
it--either financially or as an open source project.

Anyway, thanks for listening, and I hope things go well for the IXM
whether my ideas work out or not.
-Vincent

Matt Stack

Jan 7, 2010, 12:59:44 AM
to illum...@googlegroups.com
This is truly coincidental! I am in heated discussions with a couple of guys
in different email threads literally as we speak, about developing some
micro and nano-sized operating systems that would be natively and truly
parallel in nature. This isn't quite your description of a dream high level
OS, but maybe it would serve as some very basic fundamental building
blocks...

Let me try my best to summarize some consensus thoughts:

-Some of the first operating systems made little distinction between memory
and code; it all blended together
-That paradigm actually works quite nicely for parallel computing, e.g.
parallelization of tasks
-Anything you can do to distribute code across a system, and avoid having to
"compile it" when you distribute it is ideal
-Assembly language could work, but that's really low level
-C needs compilation, so that doesn't really work unless you have a compiler
sitting at every chip or core
-Mid-tier languages like Occam, Lisp, and especially Forth are great for
this low-level operating system
-Forth, in fact, is so good that it was the BIOS-like firmware of choice in
the 90's: Open Firmware on PowerPC-based systems like Macs, and Sun's
Forth-based OpenBoot on SPARC workstations
-Current programming languages don't really work at a natively parallel
level, hence the Cambrian explosion in language extensions and message
passing interfaces we see these days
-Message passing interfaces aren't necessarily the answer either, given they
are "bolt-ons"
-We probably need to start somewhere small, and climb up the system stack
layer somewhat organically
-A question: is C here forever? Should it be? Is that the best language for
parallelism?
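The "distribute code without compiling it" point in the list above is easy to make concrete. A toy sketch in Python (standing in for the Forth/Lisp-style systems mentioned, where code and data blend): each island receives the task as plain source and builds the callable locally, so nothing needs a central compile step. The `deploy` helper and task names are invented for illustration.

```python
# Toy illustration of shipping code as data: the "island" receives
# source text and interprets/defines it itself, rather than receiving
# a binary compiled elsewhere for a specific architecture.

def deploy(source, name):
    """Simulate sending `source` to an island and defining it there."""
    namespace = {}
    exec(source, namespace)       # the island evaluates the code locally
    return namespace[name]

task_src = "def double_all(xs):\n    return [2 * x for x in xs]\n"
remote_fn = deploy(task_src, "double_all")
```

The same source could be deployed to any number of heterogeneous islands, which is exactly the property the list argues a natively parallel system wants.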

Check out the attached powerpoint presentation...

:-)

Just thinking out loud,

Matt

Disruptive programming languages.ppt

Vincent Van Laak

Jan 7, 2010, 6:34:52 PM
to Illuminato
If I'm understanding you right, you're talking about times when an
additional processor is hotplugged into the system, and revealed to
each of the guest OSes as a place to do additional computation on a
request basis. (I'm confirming because I thought you said something
else and wrote a reply before I ditched that idea)

However, I think that in large part the role of this system is not to
provide more GHz, but to make the system more flexible. I wrote
before about the window-system compositing being done independently of
the guest OS. You can also imagine a small dedicated processor being
plugged in that does nothing except run a process securely (that is,
without revealing its code or memory to the rest of the system) and
forward the output onward, perhaps encrypted, thereby preventing, for
example, a game from being ripped to disk. You could also create a
module with the most optimized processor possible for your
application's needs--say, one that does floating-point transforms
natively. Since no other part of the OS needs to run on this
specialized processor, it could be added without worrying about
compatibility, so long as it properly networks with the rest of the
system.

And to that degree your point IS interesting; imagine for example that
processors revealed to the system received bytecode, which each island
then compiled and optimized for whatever processor it actually had. You could
argue, having done that, that new processor architectures could be
created and distributed without any of them trying to remove x86 from
the equation. Possibly, having done that, we'd run across one that
was wholly superior to x86 and we'd all someday switch to it.
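The architecture-neutral bytecode idea can be sketched with a tiny stack machine. This is a hedged illustration only: the instruction set is invented, and where a real island might JIT-compile the program for its own ISA, this sketch simply interprets it.

```python
# Sketch of ISA-neutral bytecode: the guest ships a small stack
# program; each island translates (here: interprets) it for its own
# processor, so no one needs to agree on x86 or anything else.

def run(program, arg):
    stack = [arg]
    for op, *operands in program:
        if op == "push":
            stack.append(operands[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# compute 3*x + 7 without caring which ISA the island really has
program = [("push", 3), ("mul",), ("push", 7), ("add",)]
result = run(program, 5)
```

The same `program` runs unchanged on any island that implements the handful of opcodes, which is the portability property the paragraph above is after.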

And that's a big part of what I like most about a modular computer:
once the interface with the rest of the computer is entirely software,
you can start inventing new hardware--processors, I/O, whatever
else--and it will be treated the same as the old kinds of hardware by
any program except those that know better.

Having completely gone in a circle around my original point I think
I'm going to have to look at that ppt again :)
-V

>  Disruptive programming languages.ppt (510K)
