Yeah, but now think more like a physicist (I actually have little idea
how to do that):
We need snapshots of those 2^200 electrons from the point of the big
bang to the present in picoseconds. And then there are the "what-ifs"...
Not to mention their x,y,z position and various qualities like energy
level and spin...
You laugh? Too bad, no?
-Barry Shein, Boston University
If there are only that many electrons in the whole universe, how
can you develop storage, using electrons, that is as large?
Isn't 10^200 more like it? I didn't think those physicists used base two
very often!
If you were to build a memory THAT big, I guess you'd have one electron
in each cell, leaving the rest of the universe devoid of electrons, thus
blowing the universe up due to all of the exposed positive charges repelling
each other.
Whew, let's not think of that (im?)possibility!
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Kirk : Bones ? | Phil Mason, Astronautics Technical Center
Bones : He's dead Jim. | Madison, Wisconsin - "Eat Cheese or Die!"
- - - - - - - - - - - - - - - -|
...seismo-uwvax-astroatc!philm | I would really like to believe that my
...ihnp4-nicmad/ | employer shares all my opinions, but . . .
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Obviously it has to be virtual :-)
And on another note, I write...
>How long would it take you
> to do a MEMQ of a list of a few HUNDRED MILLION lisp objects long?
> etc.
In reply to which everyone introduces me to arrays; thank you. Fine, now
do an (ADD1) to a few hundred million using hash arrays or some other
random storage scheme and see if you find much speedup (sheesh, it's amazing
how many people can't see the forest for the trees :-). This isn't a
pop quiz, it's a discussion; give the other side a little slack on
the trivia and stick to the issues. It is interesting, though.
-Barry Shein, Boston University
No, physicists don't usually think in binary, and no, I didn't mean
10^200th. A physicist would say there are about 10^40 electrons and a
computer scientist would call that about 2^200 because it's more useful
that way. And please, no nitpicking from people claiming these numbers are
off by an order of magnitude or two.
--
Roy Smith, {allegra,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016
A mathematically trained ex-psychologist says that 10^40 is
2^132, more or less. a^x = b^((x*log(a))/log(b)) - in this case
10^40 = 2^(40*1/.303) = 2^132 to three place precision.
I think correcting an error of over 20 orders of magnitude (200*.303 -40)
is not nitpicking, wouldn't you agree?
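As a sanity check, here is the same conversion as a short Python sketch (the .303 above is just an approximation of log10(2)):

    import math

    # 10^40 expressed as a power of two, via a^x = b^(x*log(a)/log(b)).
    exponent = 40 * math.log(10) / math.log(2)
    print(exponent)   # about 132.9, so 10^40 is roughly 2^133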
--
Jerry Natowitz (HASA - A division)
Bell Labs HR 2A-214
201-615-5178 (no CORNET)
ihnp4!houxm!hropus!jin
or ihnp4!opus!jin
Isn't it interesting how the beautiful little red flower in the forest
becomes so ugly when you discover it's a candy wrapper.
Try 10^40 electrons in 100 qubic kilometers of water. You can do the
math. Sorry to nitpick but I think you are only by a hunderd or two hunderd
orders of magnitude. You must think the universe is a very small place!
Wayne Knapp
Better fix my mistakes before I'm flamed:
Try 10^40 electrons in 100 cubic kilometers of water. You can do the
math. Sorry to nitpick, but I think you are only off by a hundred or two hundred
I don't see any smiley faces so:
Let's use water. Water has a molecular weight of about 18, weighs about
1 gram per cubic centimeter, and has 10 electrons per molecule. Avogadro's
number is 6.023E23.
In 1 cubic meter of water there are about 1E6 grams. At 1/18 moles/gram,
one cubic meter of water has about 55556 moles, and multiplying by
Avogadro's number yields 3.3E28 molecules or 3.3E29 electrons per cubic
meter of water.
That means that in 1 cubic mile of water there are about 1.38E39 electrons
and you only need about 7.2 cubic miles of water to get 10^40 electrons.
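The same arithmetic as a small Python sketch, using only the figures quoted above (pure water, 10 electrons per molecule):

    # Electrons per cubic mile of pure water.
    avogadro = 6.023e23                 # molecules per mole
    molar_mass = 18.0                   # grams per mole of H2O
    electrons_per_molecule = 10
    grams_per_m3 = 1e6                  # water at 1 g/cm^3

    electrons_per_m3 = grams_per_m3 / molar_mass * avogadro * electrons_per_molecule
    m3_per_cubic_mile = 1609.34 ** 3    # about 4.17e9 cubic meters

    electrons_per_cubic_mile = electrons_per_m3 * m3_per_cubic_mile
    print(electrons_per_cubic_mile)             # about 1.4e39
    print(1e40 / electrons_per_cubic_mile)      # about 7.2 cubic miles for 10^40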
And that is pure water. Who knows just how many electrons there are in
a cubic mile of sea water. I think there are more than 7.2 cubic miles
of water on this earth, which is just a very small fraction of the mass
of the sun, which is a very small fraction of the mass of the galaxy, which
is a very small fraction of the universe we know about, which MAY
well be just a very small fraction of all that exists, if there is even any
limit on that! The point is that any universe with only 10^40 electrons
must exist in a Dr. Seuss book (Horton Hears a Who, I believe). Besides,
if we need more mass, we can take all the energy produced by the sun for
a year and convert it to water and have enough electrons to make lots
of memory. Maybe that is what caused the ice age. Some Net.arch fanatic
commandeered the sun for a century or two to get enough energy to make
enough mass to make a memory bank for his Cray 3E24 or whatever.
I bet he didn't use paging either :->. Probably didn't use unix!!
John Blankenagel
In article <6...@hropus.UUCP> j...@hropus.UUCP (Jerry Natowitz)
pointed out that 10^40 is off by rather a lot. My apologies. I used a
calculator program on Unix to calculate 2^200 and came up with 1.7e38.
Stupid me, by now I should recognize that number as something magic -- the
largest number you can represent in floating point on a Vax. It seems that
2^200 produces floating point overflow and the calculator program I used
doesn't catch overflows. And yes, I did think 1.7e38 was a bit low (I was
expecting more like 1e60, or about 3.something bits per decimal digit).
There is a discussion going on in RISKS-DIGEST (aka mod.risks) about people
trusting computers too much. Now I find myself just as guilty. Mea Culpa.
Anyway, I stand by my assertion that there are 2^200th electrons in
the universe. If you prefer to work in decimal, work out the conversion
yourself. For the purposes of this discussion, I consider any arithmetic
error in that conversion (even a factor of 10^20!) to be trivial.
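For anyone who wants the exact figure without trusting a broken calculator program, any arbitrary-precision tool will do; for example, a Python sketch (the 1.7e38 constant is the approximate VAX single-precision limit mentioned above):

    exact = 2 ** 200                 # arbitrary-precision integer, no overflow
    print(f"{exact:.3e}")            # about 1.607e+60, i.e. about 3.3 bits per decimal digit

    VAX_F_FLOAT_MAX = 1.7e38         # approximate largest VAX single-precision float
    print(exact > VAX_F_FLOAT_MAX)   # True: 2^200 overflows a VAX float
    print(exact / 10 ** 40)          # about 1.6e20 -- the "factor of 10^20" above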
A reasonable argument that 200 bits is not a good upper bound has
been made by various other people, however. There are often times when a
sparse address space is useful. If you take my home address as a number,
RoySmith222UnionStreetBrooklynNY11231, you need more than 200 bits (7-bit
ASCII). For a world data base, it might be convenient to use this directly
as an index into a table, and the hell with hashing.
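Counting the bits in that example is a one-liner; a Python sketch, assuming plain 7-bit ASCII with no packing tricks:

    address = "RoySmith222UnionStreetBrooklynNY11231"
    print(len(address), 7 * len(address))   # 37 characters -> 259 bits, well over 200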
I'm not going to nitpick your spelling because it's irrelevant to the discussion.
Currently, cosmologists are still debating over whether the universe is open
or closed. There are quite good estimates on a LOWER bound for the mass of the
universe. If the mass is actually TWICE the current known lower bound then
the universe IS closed. The point of all this is that the known mass can be
used to get an estimate on the number of atoms in the universe. It is something
like 10^78 or 10^79. There are therefore at least that number of electrons.
Reference:
"Red Giants and White Dwarfs", can't remember author's name but it begins with a'J'. Possibly Jastrow?
This assumes that most of the mass in the universe is in the form of
ordinary matter -- protons, neutrons, and electrons. This assumption is
very much in question these days. Proposed alternatives include everything
from neutrinos to topological singularities left over from shortly after the
big bang.
The visible matter in the universe is about an order of magnitude less than
the amount needed to close the universe. This is a better lower bound on the
number of electrons -- it seems unlikely that stars contain any significant
quantity of such exotic mass.
Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Multimate International 52 Oakland Ave North E. Hartford, CT 06108
Main memory will be as much high speed RAM as you can afford.
Virtual memory will be slightly slower speed but less expensive RAM.
You'll still be able to page - but paging will be done so fast that
you won't have time for all the complicated paging algorithms that
are used today.
Andy "Krazy" Glew. Gould CSD-Urbana. USEnet: ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801 ARPAnet: aglew@gswd-vms
In fact, the Cray 2 uses ordinary dynamic RAMs for main memory. These
are the same type of devices your IBM PC uses. Of course, Cray
interleaves them many ways to get the data bandwidth needed. But the
point is, main memory is already made out of the most cost effective
memory devices available. There is no "slower speed but less expensive
RAM". Note that I am talking about main memory, not cache, which is
made of expensive high speed devices. As cache is handled by hardware,
it is not relevant to a virtual memory discussion.
There was a claim that RAM is getting cheaper than disk. Assume a 474MB
Eagle at $10,000. This yields $2E-5 per byte. Assume a 256Kbit DRAM at
$2.56 (see the 10/1/86 San Jose Mercury News Fry's Electronics ad). This
yields $8E-5 per byte.
Rotating machinery is still cheaper than silicon.
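The arithmetic behind those two figures, as a Python sketch (capacities and prices as quoted above):

    eagle_bytes = 474e6            # Fujitsu Eagle, about 474 MB
    eagle_price = 10_000.0         # dollars
    dram_bytes  = 256 * 1024 / 8   # one 256 Kbit DRAM chip, in bytes
    dram_price  = 2.56             # dollars, per the Fry's ad

    print(eagle_price / eagle_bytes)   # about 2.1e-05 $/byte for disk
    print(dram_price / dram_bytes)     # about 7.8e-05 $/byte for DRAM, roughly 4x disk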
--
In Arizona they brag about how much water it takes to maintain their
lawns and golf courses. Can you say "aquifer overdraft"?
Phil Ngai +1 408 749 5720
UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
ARPA: amdcad!ph...@decwrl.dec.com
>There was a claim that RAM is getting cheaper than disk. Assume a 474MB
>Eagle at $10,000. This yields $2E-5 per byte. Assume a 256Kbit DRAM at
>$2.56 (see the 10/1/86 San Jose Mercury News Fry's Electronics ad). This
>yields $8E-5 per byte.
>
>Rotating machinery is still cheaper than silicon.
And the advent of writable optical disks will make this truer; even though you
can only write once, and access is basically sequential, a large part of
their use will be to replace disk rather than tape. For instance, I'm
now sitting on about 10Mb of data coughed up by molecular mechanics programs;
I have to look at this stuff again and again, reanalysing it in the light
of what I never thought of before, in order to make sense of it. My continually
changing analysis programs will continue to reside on our Eagle, but as soon
as we get our optical disk (hopefully this year) all this stuff can be moved
onto it. Our facility also does optical image processing, and the situation
is similar; we have a 36Mb image library on the Eagle. We also have groups
of people doing pattern-matching on protein and nucleic acid
sequences; the library for this info is also huge, and is on the Eagle.
In fact, the entire Brookhaven National Lab's Protein Data Bank is something
we desperately need to have on line all the time, but can't because we don't
have room. We spend large amounts of time moving files in from tape and
deciding which ones to delete. Thus we're going to have the distinction
between static and updatable on-line storage, and the static storage is going
to be cheaper than the dynamic. Also, since the cost of the medium itself is
negligible (as opposed to the cost of the machine that reads and writes on it),
static doesn't even have to be THAT static; all the manual pages can go on
it, for instance.
Peter S. Shenkin Columbia Univ. Biology Dept., NY, NY 10027
{philabs,rna}!cubsvax!peters cubsvax!pet...@columbia.ARPA
This discussion started on the virtues of virtual versus
non-virtual memory systems. The optical devices I've heard
of so far would make terrible paging devices. The disadvantages
of using a WORM optical disk as paging device are obvious. Even
proposed devices that could rewrite blocks, would not make good
paging devices because of the awful seek times and rather anemic
transfer rates associated with these devices. Has anyone seen
any optical disks with numbers in these categories comparable
to good magnetic disks?
That is not to say that an optical disk doesn't beat the Hell
out of having the data sitting in the tape library.
--
Joel Upchurch @ CONCURRENT Computer Corporation (A Perkin-Elmer Company)
Southern Development Center
2486 Sand Lake Road/ Orlando, Florida 32809/ (305)850-1031
{decvax!ucf-cs, ihnp4!pesnta, vax135!petsd, akgua!codas}!peora!joel
Part of the problem is the same thing that makes Mac floppies slow.
CD optical disks are specified to have a constant linear velocity of
material passing under the heads. This means that the disk must
spin faster and slower as the head seeks in and out. While they could
make the heads seek as fast as magnetic disks, they just can't spin
the disk twice as fast, or brake to half the speed, in a reasonable time.
I don't know if WORM disks are done this way.
Traditional magnetic disks are spec'd at a constant rotational velocity
(e.g. 3600 RPM), which makes the spinning and the seeking independent.
This causes the bits on the outside of the disk to be "wider" than the
bits on the inside, since more material zips under the head in the
same amount of time. Since it must be able to read and write bits
anywhere on the disk, it has to be good enough to do it on the inner
tracks where the bits are smaller. But when on the outer tracks, all
that precision goes to waste.
I don't understand why nobody has built magnetic disks that spin at
a constant speed, but vary the clocking of data to the disk so that
all the bits end up the same width on the media. This means that you
might get 30,000 bytes per track on the inside and 90,000 on the
outside -- but who cares? On a SCSI interface, the system doesn't know
where the tracks and cylinders are anyway.
I don't know how to figure it out exactly, but I suspect that this
simple change (to a disk and its controller) could double the amount of
stuff you could put on the same disk with the same heads and almost the
same electronics.
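A rough Python sketch of that capacity argument. The 30,000- and 90,000-byte track sizes are the figures above; the track count and the assumption that track capacity grows linearly with radius are mine:

    inner_track_bytes = 30_000
    outer_track_bytes = 90_000
    tracks = 1_000                 # assumed number of tracks

    # Conventional scheme: every track holds only what the innermost track can hold.
    cav_capacity = inner_track_bytes * tracks

    # Constant-density recording: per-track capacity grows from inner to outer.
    cdr_capacity = sum(
        inner_track_bytes + (outer_track_bytes - inner_track_bytes) * i / (tracks - 1)
        for i in range(tracks)
    )

    print(cdr_capacity / cav_capacity)   # about 2.0: twice the data with the same heads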
--
John Gilmore {sun,ptsfa,lll-crg,ihnp4}!hoptoad!gnu jgil...@lll-crg.arpa
terrorist, cryptography, DES, drugs, cipher, secret, decode, NSA, CIA, NRO.
The above is food for the NSA line eater. Add it to your .signature and
you too can help overflow the NSA's ability to scan all traffic going in or
out of the USA looking for "significant" words. (This is not a joke, sadly.)
This is an interesting idea to consider. One possible challenge is
that of extracting the clock from the bits stored on the media.
Usually the clock is encoded with the data and there is no separate
clock source. The more you constrain the frequency that the clock
might be at, the easier and faster it is to acquire the clock. If you
let it vary over a factor of three, it might be more difficult,
especially with effects like peak shift, wherein flux reversals
written on the disk tend to repel one another. Because of this, the
flux reversals appear displaced from where they were written. It's not
necessarily impossible, just more difficult. Perhaps the clock
separation circuit would then become the limit on seek time instead of
head settling time. Just some speculations on my part.
--
The VT220 keyboard is an ISO standard. That means the French can
hate it as well as the Americans.
Phil Ngai +1 408 749 5720
UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
ARPA: amdcad!ph...@decwrl.dec.com
They didn't carry it as far as possible, but Commodore varied the clock rates
on their 5-1/4" drives for the PETs and C-64s. The difference was, I believe,
between 17 sectors on the inner tracks and 21 on the outer ones (don't quote me
on those figures).
You could do the same thing on an optical disk, since data-storage applications
don't require constant bit-rates, the way real-time audio output does.
Is there an expert out there who could shed more light on the subject?
-Colin Plumb (ccp...@watnot.UUCP)
"You do have one slim chance for survival. This illness is so fatal it's been
known to kill itself by accident." -Sillier than Silly
Alternatively, just extract a synchronizing signal, and let the clock
frequency be determined by what track is being accessed. A simple
servo should be able to set the required clock frequency close enough,
but it would have to get in step.
--
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nat...@astro.AS.UTEXAS.EDU
>They didn't carry it as far as possible, but Commodore varied the clock rates
>on their 5-1/4" drives for the PETs and C-64s. The difference was, I believe,
>between 17 sectors on the inner tracks and 21 on the outer ones (don't quote me
>on those figures).
>
>You could do the same thing on an optical disk, since data-storage applications
>don't require constant bit-rates, the way real-time audio output does.
Real-time audio output probably doesn't require constant bit-rates either.
I believe that CD players read bits into a buffer asynchronously, and then
put the bits out to the D-to-A at a precisely clocked rate. At least that's
how I would do it. That would avoid the need for an extremely accurate drive
motor. Variable bit density on the disk would only affect the rate at which
program material enters the buffer. As other contributors to this discussion
have noted, variable bit rates would involve some computational overhead, and
it's not clear how much this would slow things down. In addition, it may be
that for audio CD's the nominal bit-rate on the rotating medium is closely
tuned to the bandwidth of the channel to the buffer, so that doubling the
bit-rate might require some redesign. For audio applications there's no
incentive for anyone to do this. (If you could double the amount of program
material on a CD, do you think people would be willing to pay $30 each instead
of $15? Most people think CD's are overpriced anyway.) For information
storage there probably would be; if for a relatively small additional cost an
optical drive could store 2Gbyte instead of 1Gbyte per platter, I'd buy that
machine. For audio applications the bucks are in selling disks; for data
processing, the bucks are in selling machines.
I'm no expert in the field, but when I used to play with C-64's, I had a good
look at how the machine worked at the hardware level:
On the disk, a '1' was encoded as a change in the polarity of the magnetic field
on the disk. A '0' was encoded as no change.
To prevent long blank regions, nybbles were expressed in a 5-bit code which
ensured that there would never be more than 2 0's in a row.
When reading data, a bit clock would run at the expected data rate on the disk.
If a full cycle of the clock passed without a polarity change sensed by the
read head, a '0' was shifted into the serial-to-parallel converter.
If a polarity change *was* sensed, a '1' was shifted in, and the bit clock was
reset to sync up with the pulse, ensuring it would never drift far.
When the serial-to-parallel was full, the byte was latched into an I/O port,
and a signal was sent to the CPU to read the byte.
Before reading the track, the CPU would adjust the frequency divider used to
take the system clock down to the bit rate, to allow higher bit rates on
the outer tracks.
If the CPU in this example was a DMA controller, there would be _no_ processor
overhead needed to allow variable bit-rates, besides looking up the appropriate
divider ratio for the given track, which is absolutely trivial. Of course, on
a CD, where the bit rate changes continuously, you can just use a PLL instead of
a fixed-frequency clock. If this takes too long to sync up, you can prepare it
with an expected bit-rate from a variable oscillator while the head seeks.
(I don't know if this is actually feasible - it seems reasonable.)
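To make the read-side mechanism concrete, here is a small Python sketch of the idea (a simplification, not the actual Commodore hardware): a bit-cell clock free-runs at the track's expected data rate; a flux reversal inside a cell shifts in a '1' and resyncs the clock, and a cell with no reversal shifts in a '0'.

    def decode_bit_cells(transitions, bit_cell):
        """Turn flux-reversal times (in microseconds) into bits at one bit-cell length."""
        bits = []
        i = 0
        cell_start = transitions[0] if transitions else 0.0   # sync to the first pulse
        while i < len(transitions):
            if transitions[i] < cell_start + bit_cell:
                bits.append(1)                          # a reversal in this cell -> '1'
                cell_start = transitions[i] + bit_cell  # resync the clock to the pulse
                i += 1
            else:
                bits.append(0)                          # a whole silent cell -> '0'
                cell_start += bit_cell                  # the clock free-runs
        return bits

    # Reversals at 0, 4, 12 and 16 microseconds with a 4 microsecond bit cell
    # decode as 1, 1, 0, 1, 1.
    print(decode_bit_cells([0.0, 4.0, 12.0, 16.0], 4.0))

Reading an outer track at a higher data rate just means calling this with a shorter bit cell, which is the divider adjustment described above.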
I know that MFM recording (which is how most disk drives these days work) is
different, but it should give you some idea of how these gizmos operate.
------
About higher-capacity optical disks...
This may be just unsubstantiated rumour, but I heard that one of the design
goals was to fit Beethoven's 74-minute nth symphony on a disc. (n=5 or 9, I
forget which.) Thus only 44,000 +/- samples/second were taken, because they
couldn't fit more on a side. Trying to filter out the noise this creates above
the Nyquist limit (22,000 Hz) while letting through audio information all the
way up to 20,000 Hz gives CD-player designers some serious headaches. Filters
with cutoffs that steep induce serious phase distortion. If they could have
increased the information content without sending prices through the roof, I'm
sure the standards team would have done it.
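The arithmetic behind that headache, as a Python sketch (the 96 dB attenuation figure is an assumption, roughly the dynamic range of 16-bit audio):

    import math

    fs = 44_100              # CD sample rate, samples per second
    passband = 20_000        # audio to be passed untouched, Hz
    nyquist = fs / 2         # 22,050 Hz
    stopband_atten_db = 96   # assumed attenuation needed above Nyquist

    octaves = math.log2(nyquist / passband)              # about 0.14 octave
    print(f"{octaves:.2f} octaves of transition band")
    print(f"{stopband_atten_db / octaves:.0f} dB/octave rolloff needed")   # about 680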
------
-Colin Plumb (ccp...@watnot.UUCP)
"Bugs: This man page is confusing."
speed:     1000 rpm
discs:     16 (+ 2 for control)
read arms: each arm had 4 heads above and 4 below the platter, and could be
           moved to 64 positions, giving 256 tracks per surface or 512 per disc.
blocking:  the inner half of each surface had 8 sectors per track and the
           outer half had 16; each sector was 40 words and each word was 48
           bits. Total capacity was therefore about 3.9 Mword.
Transfer rate was 10,600 word/sec for the outer zone and half that for the
inner zone; average latency was about 240 ms.
It was a beast to program, too!
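Those figures do hang together; a quick Python check (assuming two recording surfaces per data disc, as the 512-tracks-per-disc figure implies):

    data_discs       = 16
    tracks_per_disc  = 512         # 256 per surface, two surfaces
    words_per_sector = 40

    inner = tracks_per_disc // 2   # 8 sectors per track on the inner half
    outer = tracks_per_disc // 2   # 16 sectors per track on the outer half

    total_words = (inner * 8 + outer * 16) * words_per_sector * data_discs
    print(total_words)             # 3,932,160 words, i.e. about 3.9 Mword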
It's been done before. I don't think that it is quite done by changing the
clocking rate, but I do know of disk drives that vary the number of sectors
per track in order to fit the most data on the disk. I believe that the
lowly Commodore 64's disk drive is one such device.
>John Gilmore
\scott
--
Scott Hazen Mueller lll-crg!csustan!smdev
City of Turlock work: (209) 668-5590 -or- 5628
901 South Walnut Avenue home: (209) 527-1203
Turlock, CA 95380 <Insert pithy saying here...>
Am I missing something here?
Ken
It depends on the encoding scheme used. For single-density (FM-encoded)
diskettes, the clock pulses are interleaved with the data pulses so all
you have to do is sync on those and presto-bingo - no changes in clock
rate are required.
For double-density (MFM-encoded) diskettes, the clock pulses are derived
from an external oscillator which is usually hooked into the drive motor
somehow to generate a consistent clock pulse (this is typically done by
means of a phase-locked loop).
Most of the variable-rate drives with which I am familiar take a slightly
different tack by varying the speed of the drive motor to increase the
bit density on the outer tracks. The Victor 9000 was the first micro that
I am aware of which used this technique - they hooked up an Intel 8048 to
a bare drive motor and used it as the drive controller. It was smart enough
to compensate for differences between drive rotational speed (the 'C' in
CAV isn't always so from drive to drive) so you could move diskettes from
machine to machine without worrying about having the data get garbaged.
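As a rough illustration of what the spindle has to do under such a scheme (the radius and speed figures below are made up for illustration, not Victor 9000 specifications):

    # Constant linear density: slow the spindle as the head moves outward, so
    # every track passes the same length of media under the head per second.
    innermost_radius_mm = 25.0     # assumed radius of the innermost track
    rpm_at_innermost = 300.0       # assumed spindle speed on that track

    def rpm_for_track(radius_mm):
        return rpm_at_innermost * innermost_radius_mm / radius_mm

    for r in (25.0, 35.0, 45.0, 55.0):
        print(f"radius {r:4.1f} mm -> {rpm_for_track(r):5.1f} rpm")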
Variable-speed drives are obviously impractical for large hard disks because
of the inertial forces built up by the spinning platters. I would think
a variable clock rate would work for a disk with fixed media if the rotational
speed could be tightly controlled or compensated for. Removable media are
a different story, however, because the tiniest difference in rotational
speed would make disks unreadable by any drive besides the one that wrote them.
I would expect the hardware for such a system to be extremely complicated
and cost-prohibitive.
Jim Greenlee
--
The Shadow
Georgia Institute of Technology, Atlanta, Georgia 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!jkg
The problem is that it ISN'T a simple change. You have to design a circuit
which can handle extreme variations in input data at a fairly high frequency.
While circuits like this can be designed, they are neither simple nor cheap.
Illustrative example: assume an 8" disk, first track at 7", last at 3.5".
Obviously, the data on the 7" track will be recorded at twice the clock
frequency, since the track is exactly twice as long as the first. If we
assume a 5MHz data rate on the 3.5" track, this is a 10MHz data rate at the
7" track; any circuit designed to handle both will be considerably more
complex than one designed only to handle one (and remember, the circuit
must also handle all in-between cases too!). Even more importantly, one of
the most critical of circuit elements in a disk controller is the timing
circuitry, which must be highly accurate in order to "catch" the bits
properly. Designing a circuit of sufficient accuracy which would operate
over an (at least) two-to-one range is VERY difficult.
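The numbers in that example, as a Python sketch (constant rotational speed assumed, so the data rate scales directly with track radius):

    inner_radius_in = 3.5
    inner_rate_mhz  = 5.0

    for r in (3.5, 4.5, 5.5, 6.5, 7.0):
        print(f'{r:.1f}" track -> {inner_rate_mhz * r / inner_radius_in:.2f} MHz')
    # The data separator has to lock onto everything from 5 MHz up to 10 MHz.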
--
----------------
"... if the church put in half the time on covetousness
Mike Farren that it does on lust, this would be a better world ..."
hoptoad!farren Garrison Keillor, "Lake Wobegon Days"
See "Constant-density recording comes alive with new chips", Mark
Young, _Electronic Design_, Nov 27, 1986.
It shows how to solve the variable clock problem. It also talks about
how it is important that the disk you are using not have its own clock
separator, which leaves out ESDI and SMD, with only ST506 as an
acceptable commonly available drive type. Also, the drive must have a
read amplifier with enough bandwidth to support the higher bit rate.
Of course, if you wanted to invent your own disk this would not be a
problem. And the improvement gotten from constant-density recording is
on top of any other improvements you make, such as RLL (multiplies).
Yes, the OS now has to deal with a differing number of sectors per cylinder;
I wonder what this would do to the Berkeley Fast File System? Anyway,
it's a good article and you should read it.