I feel we are losing track of what I wanted to talk about. Imagine
memristors [1] work and that we can embed them into processors and get
persistent memory right on chip.
What kind of architecture would you build? Would it only be
programmable by another chip or would it be able to alter its own
state so that it could alter the program it instantiated?
There are many people working with networks of compute nodes that are
standard processors/cores with large amounts of off-CPU memory that
controls the program. I'm interested in things that are a little more
esoteric.
Will
It would be nice if you referenced things somewhat. The closest I can
think of to what you are describing is Pig,
http://hadoop.apache.org/pig/, although that does tend to go to
persistent storage.
The differences I can see are:
1) You might want to be aware of how many (compute/memory) cells you
have left, so you can avoid having to swap around processes.
2) You wouldn't have to worry about cache in the same way.
http://lwn.net/Articles/252125/
3) As programs are persistent, you don't have to worry about the ELF
loader or equivalent getting taken over and trojaning all your code on
a reboot.
4) Real-time guarantees are easier to give.
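A minimal sketch of point 1, assuming a persistent-memory chip can be modelled as a pool of compute/memory cells (all names here are invented, with Python standing in for whatever the real toolchain would be): the scheduler checks the free-cell count before placing a process, and refuses rather than swapping.

```python
# Hypothetical sketch: the chip as a fixed pool of compute/memory cells.
# A placement is refused when too few cells remain, instead of swapping
# processes around.

class CellPool:
    def __init__(self, total_cells):
        self.total = total_cells
        self.used = {}          # process name -> number of cells held

    def free_cells(self):
        return self.total - sum(self.used.values())

    def place(self, name, cells_needed):
        """Place a process only if enough cells remain; never swap."""
        if cells_needed > self.free_cells():
            return False        # caller must shrink or wait
        self.used[name] = cells_needed
        return True

pool = CellPool(total_cells=1024)
assert pool.place("vision", 600)
assert not pool.place("planner", 600)   # refused: only 424 cells left
assert pool.free_cells() == 424
```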
I think we are coming at things from different directions. I'm
interested in things from a robotics/potentially hostile code
environment. You, I'm guessing, are coming from a clean and
controllable data centre/data processing point of view.
>>
>> What kind of architecture would you build? Would it only be
>> programmable by another chip or would it be able to alter its own
>> state so that it could alter the program it instantiated?
>
>
> Honestly, I would not change too much from what is available now, maybe
> tweak some of the design elements and update certain aspects of the cores.
> Also, I might extend the integer instructions to support multidimensional
> integer operands. Some functional programming constructs in C++ would be
> nice e.g. continuations.
So we were lucky, and we found the processor architecture early in
computing's development that will last us forever under all
conditions? People will still be using something similar 100 or 1,000
years in the future?
> Just about anything you can imagine is available now if you know where to
> shop. Not memristors specifically but they are a relatively narrow (but
> cool) optimization. For a given silicon budget, everything is a tradeoff.
> Reconfigurable systems are interesting but are only narrowly competitive
> when faced with systems with a lot of different functional units connected
> by a fast fabric and a programmer that knows how to exploit them.
Is that due to the fact that all our skills, knowledge and R&D are
concentrated in these types of system, or because they are
intrinsically better?
>> There are many people working with networks of compute nodes that are
>> standard processors/cores with large amounts of off-CPU memory that
>> controls the program. I'm interested in things that are a little more
>> esoteric.
>
>
> Define "esoteric". Dynamically reconfigurable computing has its place, but
> it has practical drawbacks. I'm familiar with a number of systems that work
> like this and it is less compelling in practice than you might think.
Do our own neurons suffer from the same practical drawbacks? If not,
can we mimic what they do differently while keeping the
understandability of silicon? I don't think that the space of
reconfigurable computing has been fully explored, so I would like to
see what can be done.
Will
I do like the dataflow-style languages. They abstract away a bit too
much for my liking (at least in their current forms) for system-level
programming.
For example, how would you reference transforms in such a way that
they can be (re)programmed in a dataflow language?
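One possible answer, as a toy Python sketch (all names invented, not any existing dataflow framework): reference each transform by a name in a registry, so a running graph can have a stage rebound at runtime without being rebuilt.

```python
# Toy dataflow graph where stages are looked up by name on each run,
# so rebinding a name (re)programs that stage in place.

class Graph:
    def __init__(self):
        self.transforms = {}    # stage name -> callable

    def set(self, name, fn):
        self.transforms[name] = fn   # (re)program a stage at runtime

    def run(self, stages, value):
        for name in stages:
            value = self.transforms[name](value)
        return value

g = Graph()
g.set("scale", lambda x: x * 2)
g.set("shift", lambda x: x + 1)
assert g.run(["scale", "shift"], 10) == 21

g.set("scale", lambda x: x * 3)      # reprogram without rebuilding the graph
assert g.run(["scale", "shift"], 10) == 31
```

The indirection through names is the point: the pipeline description (`["scale", "shift"]`) is data, so it can itself be stored, inspected, or rewritten.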
Will
> Basically, it is a CPU optimized for extremely fine-grained parallelism in a
> distributed memory environment at the expense of efficiency at cache-local
> sequential processing. There are some other new processor designs based on
> the same ideas (deep latency hiding). These architectures are extremely
> efficient for graph and data flow models relative to more conventional
> processors.
Thanks, interesting.
I'm a bit busy at the moment. I'll get back to you in a bit. Just to
say that I suspect my search for a better paradigm is mainly due to my
interest in security. I want to do things like avoid single points of
failure. Yup, you can simulate that, but if you are always using the
same security system it would be better for it to be embodied in the
system architecture for efficiency.
These things can be safely ignored by supercomputer designers for the
most part :) And so they add an extra variable to the phase space of
systems I am considering.
Will
On 23 July 2010 09:13, J. Andrew Rogers <jar.m...@gmail.com> wrote:
>
> Eliminating single points of failure is a big deal with supercomputing
> systems. When you start pushing a million compute elements failures tend to
> be routine. It is not a big problem, just select an appropriate level of
> redundancy and/or redo for the cluster.
I meant single point of failure in the sense that if a malicious bit
of code got control of a single resource, the system would go down or
be highly adversely affected. So no flat memory models, no
uninterruptible processes, no Ring 0.
> Security is almost an orthogonal matter. Hardened network fabric
> technologies exist but they are not exactly low latency if you are using
> them as a distributed computing fabric. At the level of an individual
> compute node, security granularity is expensive and it depends on how
> decentralized you want your authority. Pervasively decentralized authority
> is a Hard Problem.
What is your opinion on the old capability model?
http://en.wikipedia.org/wiki/Capability-based_security
It seems a good match for decentralized authority.
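A minimal Python sketch of the capability idea, assuming nothing beyond the description at that link (the class and helper names are invented): authority is an unforgeable token bundling a resource with rights, and delegation works by handing out attenuated copies rather than editing an access-control list.

```python
# Capability sketch: whoever holds the object holds the authority.
# There is no identity check; access is decided by the rights carried
# in the capability itself.

import secrets

class Capability:
    def __init__(self, resource, rights):
        self.resource = resource
        self.rights = frozenset(rights)
        self._token = secrets.token_bytes(16)  # unguessable, so unforgeable

    def restrict(self, rights):
        """Delegate a weaker capability (attenuation)."""
        return Capability(self.resource, self.rights & set(rights))

def read(cap):
    if "read" not in cap.rights:
        raise PermissionError("capability lacks 'read'")
    return cap.resource["data"]

full = Capability({"data": "secret"}, {"read", "write"})
read_only = full.restrict({"read"})
assert read(read_only) == "secret"
```

The attenuation step is what maps onto decentralized authority: any holder can delegate a strictly weaker capability without consulting a central authority.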
Will
I was going with the wikipedia definition
"Flat memory model or linear memory model refers to a memory
addressing paradigm in low-level software design such that the CPU can
directly (and sequentially/linearly) address all of the available
memory locations without having to resort to any sort of memory
segmentation or paging schemes."
I suspect it is one of these things that has multiple usages.
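To illustrate the quoted definition with a toy Python model (the segment table and sizes are invented): flat addressing indexes one linear array directly, while a segmented scheme reaches the same byte through a (segment, offset) pair translated via a table.

```python
# Toy model of flat vs segmented addressing over the same 64 KiB of memory.

memory = bytearray(64 * 1024)

def flat_read(addr):
    return memory[addr]                  # one linear address space

SEGMENT_BASES = {0: 0, 1: 16 * 1024}     # hypothetical segment table

def segmented_read(segment, offset):
    return memory[SEGMENT_BASES[segment] + offset]

memory[16 * 1024 + 5] = 42
assert flat_read(16 * 1024 + 5) == 42
assert segmented_read(1, 5) == 42        # same byte, two addressing schemes
```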
>> What is your opinion on the old capability model?
>>
>> http://en.wikipedia.org/wiki/Capability-based_security
>>
>> It seems a good match for decentralized authority.
>
>
> Capabilities are great. The caveat is that they are expensive in a
> distributed environment and in the case of a pervasively decentralized
> network assume no bad actors.
Yeah, I don't tend to think too distributed (e.g. the capabilities are
still opaque to the system).
> A very, very big problem with pervasively decentralized authority is Nash
> equilibria. The strategies available that are required for a pervasively
> decentralized authority to work are incredibly brittle; deviate from those
> strategies even a little bit and you will end up with pathological or
> effectively centralized behaviors in systems where the user may nonetheless
> be benign. Guaranteeing a stable and efficient equilibrium in a pervasively
> decentralized system is a problem we only know how to solve under very
> strict constraints that do not apply to many applications.
Interesting, I didn't know there was work along those lines. I'll
have to read more on the subject.
Will