Subtracting 1/3 twice?
Figuring out how many bits are in the floating point:
here the idea is to, ..., "subtract 1/3 from 1 twice, get
.333...333... turning into ...34... somewhere", and that "somewhere"
is as far as the floating point unit's precision extends.
I.e., it might be fair to say, "each subtracted 1/3
extended all the way down as .333..., leaving .333...34...,
with the deviation starting wherever the floating point unit is beyond precision".
I figure it would be close to or the same as any other usual calculation,
also "for all the bits in the FPU value, whatever the result's bits are
is its representation".
I.e., the floating point unit is much bigger than fixed point;
the register width is usually enough, 32/64 bits for floats and doubles, and 80-bit Intel floats.
Then, to keep the guarantees of fixed point while on floating point,
I mostly look at floating point for how to implement fixed-point arithmetic,
and mostly I use the vector registers, with the MMX registers
used for shifting and so on instead of for multimedia multiply transforms.
Then, though again the floating point unit is bigger, it doesn't have to
do integer arithmetic, I'd like to think that the floating point units, are
given to statistics, for where words are small in small alphabets, in words,
while floating point values are "80 bits".
Which is good to have defined as "10 8 bits".
Also, I forget that it's a real number in [0, infinity), not just [0, 1],
where by "infinity", excuse me, I also mean "Not a Number", "NaN".
I guess IEEE 754 and 854 kind of define this, IEEE floats,
with platforms that "do or don't" do IEEE float.
Now I've also written a scritto, and if you'll excuse me I'll write it here.
You know, with a C runtime, and sockets, and, ..., a C runtime, all sorts of C then C++ code compiles, ....
Then, some event machine: it's that, let's say, there is a JavaScript runtime, and there is
a "static JavaScript runtime", in terms of, say, the "C runtime, ...".
This is enough library that most tools make a POSIX.
"Why old POSIX?" Yeah, "why old POSIX", ....
Tooling down to POSIX is basically for reflecting the derivation of POSIX,
in the line terminal and the setting. Then, there is, for making in tooling,
the question of what environment and configuration make a terminal
result in a terminal.
The console and the terminal, here for kernel mode and user land,
reflect a usual enough runtime, quite unremarkable.
Then, though, it results, for the framework and platform, in basically giving off
the hardware access, with the result that resources are actually well-defined again,
putting nice conventions all over, with the result that "cooperative multithreading under
constant bounds with immediate suspend is nice", which would result in routines
that are, for example, usually organized into "an overall adaptive resource",
and all sorts of running emulation that makes it possible to inspect code while
running it, in terms of binary objects and then running them.
So, a terminal, then for a shell, would usually be enough for the filesystem,
with the result that "as a computer, it looks like a shell, ...". The framebuffer is usually
considered a single area, as in, for example, full-frame console emulation,
which must provide a shell, console, and terminal, then with a usual user's minimal
semantics, the simplest tools laying around, and homes.
Here the console emulation itself is often where, under console-only operation,
the console writes to the framebuffer; for example, where otherwise "what is the framebuffer"
is a "virtual framebuffer" if it's made virtual, there are no patent reasons not to drive output
directly to RAM besides the video adapter.
So, this operating system should be along the lines of "the system has 4 GB of RAM, a
1 GHz quad CPU, and an entire terabyte of disk", then to run as "the system has 32-bit
virtual address RAM according to the runtime", 1 megabyte of disk, and zero RAM.
Then, for driving full HD, and especially for split-screen HD, or 30/60 Hz refresh, it's usually
to be expected that the drivers must be in a DMA convention. There's also the disk
controller; far be it from me to queue the spool to the disk controller anyways, "putting
all the I/O through the disk controller".
(What results is a disk controller.)
Here that's the idea: the I/O, to work the I/O, in terms of basically cooperating with
the disk controller and network I/O, and the fact that disk access is "read-often" or
"read-random", which makes "memory-mapped files" for the C runtime out of the disk
controller cache and virtual memory.
So, the I/O is broken down into protocols; the Internet protocols these days roughly
reduce to "compression", and "encryption", and catalog, which result in block coders,
such that all computer network I/O code executes in their block coding, with "TLS sessions"
and "Deflate/Gzip unencumbered compression", or data formats.
Then protocols vary, for example, in their routine; all transfer, for making:
JPEG protocols <- there are those
"Internet protocols", ..., data transfer in data formats
This way there is for making organization: "here are your videos", "here is your code",
"this is your shell", ....
I'd love to think I could make a computer, that I could program by tapping at the shell.
Probably best to start with "disk controller and DMA".