Skybuck Flying
Sep 29, 2015, 3:22:31 AM
Hello,
(Before reading this, make sure to first read Skybuck's Parallel Universal
Code; the posting below builds on it.)
(Now that I have published Skybuck's Parallel Universal Code, I will also
re-publish this (parallel) universal computer idea in case you missed it
deeply nested in another thread ! ;) yeah it deserves its own
topic/subject line ! ;))
Consider this:
Skybuck's (Parallel) Universal Computer (Implementation) Description
Version 0.01 created on 20 September 2015 by Skybuck Flying
(Also immediately published on usenet ! ;) :) though not under a separate
thread lol... may have to re-post this later when I post my parallel
universal code :))
(I should probably also first work on an encoding/sorting thing before
attempting to create this computer... just to be sure encoding is possible
too and works well ! ;) :))
Before I describe/produce a dummy decoder example for you I will first
explore a much more important topic:
Now that a somewhat efficient solution has been found, does this open up the
way to a truly Universal Computer ?!
The answer might be yes, thus I shall explore this topic much more !
First I shall describe the problem and then show how this solution could be
used to create a somewhat efficient and easily universal computer.
The problem with the universal computer is as follows:
How to scale values up ? How to make them grow without overwriting other
fields ?
Example of problem:
a1, a2, a3, b1, b2,
If field A were to grow by 1 bit, it would overwrite b1.
This creates a very shitty and complex problem at first sight:
b would need to be moved, a would need to be reallocated, and so forth ?!
Now how to solve this problem ? Let's take a look at how this solution can
be used to create a much easier one.
The first step apparently is to "extract" / "unpack" / "separate" the bits
from each other.
This is done by storing them vertically so to speak, imagine a 2D storage
medium:
a1 b1
a2 b2
a3
Now it's easy to see how A could grow easily... without overwriting B.
a1 b1
a2 b2
a3
a4
a4 is simply added vertically, and the same can be done with all other
fields. Thus all problems like that are solved.
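As a minimal sketch of this vertical/columnar idea in Python (the field
names and bit values here are made up purely for illustration): each field
is its own list, so growing one field never overwrites a neighbour.

```python
# Hypothetical sketch: each field is stored as its own "column" of bits,
# so appending to one field cannot overwrite another field.
fields = {
    "a": [1, 0, 1],   # a1, a2, a3
    "b": [0, 1],      # b1, b2
}

def grow(fields, name, bit):
    # Growing "vertically" is just an append; no other field moves.
    fields[name].append(bit)

grow(fields, "a", 1)               # adds a4
assert fields["a"] == [1, 0, 1, 1]
assert fields["b"] == [0, 1]       # b is untouched
```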
However, let's look at a slightly more complex example:
a1 a2 a3 a4 b1 b2 b3 c1 d1 d2 d3 d4 d5 d6
a1 b1 c1 d1
a2 b2 xx d2
a3 b3 xx d3
a4 xx xx d4
xx xx xx d5
xx xx xx d6
All fields can grow easily vertically.
However, for the solution to actually work, sorting (on field length) is
required, for example from large to small:
d1 a1 b1 c1
d2 a2 b2 xx
d3 a3 b3 xx
d4 a4 xx xx
d5 xx xx xx
d6 xx xx xx
Now let's look what happens if B grows by 2.
d1 a1 b1 c1
d2 a2 b2 xx
d3 a3 b3 xx
d4 a4 b4 xx
d5 xx b5 xx
d6 xx xx xx
^ The fields are no longer sorted on size and thus need to be re-sorted:
d1 b1 a1 c1
d2 b2 a2 xx
d3 b3 a3 xx
d4 b4 a4 xx
d5 b5 a5 xx
d6 xx xx xx
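The re-sort above can be sketched in Python like this (field names and
lengths follow the example; the actual bit values are placeholders): the
column order is simply the field names sorted by length, largest first.

```python
# Sketch: after B grows, re-derive the column order from the field lengths,
# largest first, as the text requires. Bit values are dummy placeholders.
fields = {
    "a": [0] * 4,   # 4 bits
    "b": [0] * 5,   # b just grew from 3 to 5 bits
    "c": [0] * 1,
    "d": [0] * 6,
}

# Python's sorted() is stable, so equal-length fields keep their order.
order = sorted(fields, key=lambda name: len(fields[name]), reverse=True)
assert order == ["d", "b", "a", "c"]   # matches the re-sorted table above
```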
Now the question is how can sorting be done ?
The following information is available from the encoded format:
RowCount
RowLengths
DataBits
There is no true sorting/order information. I am also not sure if it's
actually required.
A universal computer could, for example, simply re-sort after every growth
just to be sure it's in an OK state ?!
But what if we want to reference a variable ?! If their positions suddenly
change that might be a problem ?!
The idea/solution I had before writing this posting is probably to use an
array which indicates the order of the fields.
So we would need an array containing "order" information which would look as
follows.
It would be a "lookup" array.
One can also assume that A is index 0, B is index 1, C is index 2, and D is
index 3; it's not necessary yet to encode them as such, but this gives
further ideas for how to implement it.
Anyway in my description below I will keep using A,B,C,D for now and simply
write the table/array under it:
The order information is 0 to 3, 0 being largest=first and 3 being
smallest=last
A B C D
2 1 3 0
From this order table it can be determined where these fields are located in
the vertical data table above... or in other words, in which column they
reside in.
Thus when sorting the only thing necessary to do is update this order table
as well.
This way by using the order table these fields can be addressed in an
indirect way.
The order table points towards where the fields are.
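A quick illustrative sketch of this indirection (the column contents are
invented; the order table values are the ones from the table above): reads
go through the order table, so a re-sort only has to update the table, not
every reference to a field.

```python
# Sketch of the "order table": a lookup array mapping field index
# (A=0, B=1, C=2, D=3) to the column the field currently occupies.
order_table = [2, 1, 3, 0]   # A is in column 2, B in 1, C in 3, D in 0

columns = [
    [1, 1, 1, 1, 1, 1],  # column 0: d1..d6 (dummy values)
    [0, 0, 0],           # column 1: b1..b3
    [1, 0, 1, 0],        # column 2: a1..a4
    [1],                 # column 3: c1
]

def read_field(field_index):
    # Indirect addressing: the order table says which column to read,
    # so re-sorting the columns only requires updating the table.
    return columns[order_table[field_index]]

assert read_field(0) == [1, 0, 1, 0]        # field A
assert read_field(3) == [1, 1, 1, 1, 1, 1]  # field D
```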
Thus I believe this is a solution for a universal computer.
If the encoded format/storage is to be maintained then I guess this would
require re-packing of the information/data fields after the grow/resorting.
So basically the procedure to grow one of these fields could look as follows:
1. Grow whatever field you would like.
2. Pretend the field actually grew without actually growing it.
3. Re-sort the table pretending the field grew. Also update the order table.
4. Re-pack the table as to include a free bit for the field that grew.
5. Store the new data bit in the grown bit field.
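The five steps above can be sketched end-to-end in Python. This is one
possible reading of the procedure, not a definitive implementation; the
column layout and the stand-in "repack" (actually appending the bit) are
assumptions for illustration.

```python
# Hypothetical sketch of the five-step grow procedure. Columns are kept
# sorted longest-first; order_table maps field index -> column index.
def grow_field(columns, order_table, field_index, new_bit):
    col = order_table[field_index]
    # Steps 1-2: pretend the field grew by one bit (don't move data yet).
    lengths = [len(c) for c in columns]
    lengths[col] += 1
    # Step 3: re-sort the columns on the pretended lengths, largest first
    # (stable sort), and rebuild the order table to match.
    ranking = sorted(range(len(columns)), key=lambda i: lengths[i], reverse=True)
    columns[:] = [columns[i] for i in ranking]
    for field, old_col in enumerate(order_table):
        order_table[field] = ranking.index(old_col)
    # Steps 4-5: "re-pack" by actually appending the new data bit.
    columns[order_table[field_index]].append(new_bit)

# Start from the sorted example: d(6), a(4), b(3), c(1).
columns = [[1] * 6, [0] * 4, [1] * 3, [0] * 1]
order_table = [1, 2, 3, 0]        # A=col1, B=col2, C=col3, D=col0
grow_field(columns, order_table, 1, 1)   # grow B: 3 -> 4 bits
grow_field(columns, order_table, 1, 0)   # grow B: 4 -> 5 bits
assert order_table == [2, 1, 3, 0]       # now d, b, a, c as in the text
assert [len(c) for c in columns] == [6, 5, 4, 1]
```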
As far as I can tell, this seems like the first truly practical and somewhat
efficient algorithm that could make this universal computer actually work
and allow fields to grow.
A drawback of this could be that the entire program/data structure has to be
moved/repacked for the sake of just 1 bit...
The larger the program, the more data needs to be moved... that's somewhat
unfortunate...
Perhaps additional ideas/solutions could be incorporated to relieve some of
this data growing and data shrinking stress.
Perhaps even exponentially.
One possible idea could be to also keep track of a "grow" and "shrink"
table.
This table indicates if a field grew or shrank lol... the last time... like
some sort of cache table describing what kind of operations were done.
Thus it gives some indication of what is to be expected... like a prediction
table. This prediction table could then be used to grow much further than
currently needed.
Perhaps exponential is pushing it... but it could be interesting... or
perhaps slightly exponential, or perhaps some kind of other formula for
growing...
Perhaps first time +1, second time +2, third time +4, fourth time +10,
fifth time +20 additional bit spaces... or perhaps each time just +1.
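One way this growth schedule could be sketched (the numbers follow the
example above; capping at the last step is my own assumption, since the
text does not say what happens after the fifth growth):

```python
# Illustrative growth schedule: allocate progressively more spare bits
# each time a field grows again. The cap at the last entry is an assumption.
schedule = [1, 2, 4, 10, 20]

def extra_bits(times_grown):
    # times_grown = how often this field has grown before (0 = first time).
    return schedule[min(times_grown, len(schedule) - 1)]

assert extra_bits(0) == 1    # first growth: +1 bit
assert extra_bits(1) == 2    # second growth: +2 bits
assert extra_bits(7) == 20   # beyond the schedule: stay at +20
```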
Anyway let's first describe the "resize table" as I will call it:
A B C D
0 1 0 0
This table describes that last time B grew by 1.
Now let's suppose B wants to grow again.
This time the table is incremented as follows:
resize table:
A B C D
0 2 0 0
2 bits could be allocated instead of just one... which just gave me another
idea.
Instead of calling it a resize table, it could also be called a "free table"...
or we could just add that and keep both... which also seems very interesting.
So there could also be an additional table which describes how much free
space each variable/field has:
free space/bits table:
A B C D
0 0 0 0
Here we could allocate additional bits for each field.
Thus the free space/bits table could be used to describe their total
length... versus their "data" length.
Perhaps it's even useful to include a data length table just in case as
well... to make processing even easier.
data length table:
A B C D
4 5 1 6
or we could rename the free space/bits table to a total length table:
total length:
A B C D
4 5 1 6
Now let's suppose B wants to grow again...
Let's suppose the resize table was:
resize table:
A B C D
0 1 0 0
Last time B grew from 4 to 5.
Now let's suppose we want to grow it again by 1.
This time the resize table is used to allocate an extra bit because there is
already a 1 in it.
So the new tables will look like:
data table:
d1 b1 a1 c1
d2 b2 a2 xx
d3 b3 a3 xx
d4 b4 a4 xx
d5 b5 a5 xx
d6 b6 xx xx
xx -- xx xx
-- = free bit space.
xx = does not exist. just place holder to keep ascii/text nicely readable/in
format/layout ok.
order table:
A B C D
2 1 3 0
resize table:
A B C D
0 2 0 0
total length table:
A B C D
5 7 1 6
data length table:
A B C D
5 6 1 6
Now the processor/decoding algorithm can see that B has one extra bit:
Total Length - Data Length = 7 - 6 = 1
This extra bit is to be ignored when reading/operating on B...
A data length of 6 should be used/assumed.
The extra bit is only there in case B needs to grow again... if it needs to
grow again... then at least the movements of data can be avoided.
Re-sorting may still need to take place.
Unless perhaps re-sorting is done on total length instead of data length...
not sure if that matters... perhaps it would even be better to do it on
total length, so the correct extra bit is always written when necessary...
otherwise the wrong bit might be used... and all other fields beyond it may
be off by 1.
So sorting on total length might actually be needed.
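The bookkeeping for total length versus data length can be sketched like
this (field lengths taken from the tables above; the function names are
made up): a field that already holds a spare bit can grow with no data
movement at all.

```python
# Sketch: track total length vs. data length per field, so a field that
# grew recently keeps a spare bit and the next growth needs no repacking.
total_length = {"a": 5, "b": 7, "c": 1, "d": 6}
data_length  = {"a": 5, "b": 6, "c": 1, "d": 6}

def free_bits(name):
    return total_length[name] - data_length[name]

def grow_cheaply(name):
    # If a spare bit exists, growing is just a bookkeeping update.
    if free_bits(name) > 0:
        data_length[name] += 1
        return True    # no data movement, no repacking needed
    return False       # fall back to the full re-sort/re-pack procedure

assert free_bits("b") == 1       # B has one spare bit
assert grow_cheaply("b") is True # so it grows for free
assert free_bits("b") == 0       # spare bit is now used up
```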
I think... from the looks of it... this could be a really nice first
solution to a practical Universal Computer... with added benefit... that
it's actually executable in parallel ! Which is also very interesting.
Plus it uses quite an efficient storage structure.
However the extra tables may take away from that somewhat... but such is
life... a computer has to do something =D it can't be free...
However at least the tables shouldn't be that much of a problem... or they
can be stored in main ram/memory instead of inside a computer chip. Though
that might make the whole thing slower...
Eventually some of these tables could end up in caches... or be explicitely
partially read or so.
Sorting might be a drawback... then again... with an efficient/fast and
parallel sorting algorithm it may not be that bad... though most
comparison-based sorting algorithms are O(n log n) rather than linear.
Though radix sort is fast... not sure if radix sort could be used for
this... not sure if radix sort has a parallel version... merge sort should
be possible in parallel though...
The sorting itself would also require extra memory... so this is somewhat
memory intensive, but not too bad...
Some remaining question:
Would this design actually be faster than a "memory manager" with
"reallocation" and "movements" and so forth ?!?
Perhaps that question isn't that important... since nobody has ever created
anything like this... so there are no alternative designs ?!? Though one
could probably come up with some kind of complex design... but would it
actually be less complex than this ?
Probably not... the complexity of alternative solutions would probably
create a much slower solution... and would probably not even work in
parallel.
So perhaps this new design of mine is not to be underestimated in its
potential/value ! ;) =D
I kinda like it... because it offers an actual implementation idea which is
practical, somewhat efficient computing-wise, parallelizable, pretty darn
efficient storage-medium-wise, and doable, with comprehensible/maintainable
complexity.
I'd believe/think that this might actually be an interesting design for
embedded systems too, for example perhaps to save on memory space,
since the packed storage format is pretty efficient.
Concerning the extra space bits... when transmitting or storing these
formats to other computers, perhaps a special functionality could be added
to the universal computer to get rid of these extra bit spaces... and to
compact the stream to be as efficient as possible.
Perhaps this is even a requirement, since the other side has no way of
knowing if a bit is a data bit... or just extra space... it would either
require transmitting the tables... which seems excessive... or a special
encoding which describes the extra bits...
The extra bits could be described as loose fields... but that's probably
not good enough... it must be related to which fields can actually use
them... though all fields could probably use them... though it seems most
likely and smart to assign each one to the field that comes before it.
Thus what is left is to describe whether a field is a data field or an
extra field. Which would require 1 extra bit per field.
This is doable by interleaving one extra bit per field description, a meta
bit.
d1 b1 e1 a1 c1
d2 b2 xx a2 xx
d3 b3 xx a3 xx
d4 b4 xx a4 xx
d5 b5 xx a5 xx
d6 b6 xx xx xx
The data table could look like the above... with the extra field singled
out into e1.
Since the first row includes all fields, it feels most natural and smartest
to precede the first row with the meta row.
m will be used to indicate this meta data; the final encoding would look
like:
m1,m2,m3,m4,m5
d1,b1,e1,a1,c1
d2,b2,a2,
d3,b3,a3
d4,b4,a4
d5,b5,
d6,b6
The rest of the final encoding should be easy to figure out... no
additional row length is needed; the length of row 0 can also be applied to
the meta data m.
I will leave it at that for now ! ;)
This way the other computer can also benefit from the extra bit spaces and
learn from the other computer which fields are most likely to grow or
shrink...
the resize table could also contain
A B C D
0 -1 0 0
to indicate that B shrank.
I have not yet explored what would happen if a field shrinks... but I would
expect the same operation/algorithm will handle it.
Thus probably no extra investigation into this is necessary... but I may do
so at a later time if the need arises ! ;) =D
Bye,
Skybuck.