
Was there ever an analog compact disc?


Kevin Sutton

Mar 8, 1995, 8:44:00 PM
A friend asked me the other night if anyone ever thought of
bringing out an analog compact disc, prior to the invention of
the digital version. That is, a CD using LaserDisc picture
type technology (FM modulation?) to store audio on a disc.

Does anyone know if this was ever attempted and, if so,
whether the system ever saw the light of day?

Thanks
Kevin

Murphy's First Law of Laboratory Practise:-
"Hot glassware looks just like cold glassware"

Kevin Sutton Internet : sut...@crop.cri.nz
New Zealand Institute for Crop & Food Research Limited
Private Bag 4704, Christchurch, NEW ZEALAND

Lynn Olson

Mar 12, 1995, 7:10:01 AM
In article <3jm04q$3...@netnews.upenn.edu>, "Kevin Sutton"
<Sut...@crop.cri.nz> wrote:

> A friend asked me the other night if anyone ever thought of
> bringing out an analog compact disc, prior to the invention of
> the digital version. That is, a CD using LaserDisc picture
> type technology (FM modulation?) to store audio on a disc.
>
> Does anyone know if this was ever attempted and, if so,
> whether the system ever saw the light of day?

In a very indirect sense, the answer is "yes". These would be the
first-generation Laserdisks, which used separate Left and Right FM
carriers for the movie soundtrack. The FM soundtracks were submitted to
the further indignity of the CX compansion system (the CBS Labs clone of
the Dolby System), which increased the S/N from 65dB to 85dB. Aside from
specs, though, the real reason to use the CX compansion was to reduce sync
buzz leaking onto the audio tracks (a similar technique is used to reduce
sync buzz for Beta and VHS Hi-Fi).

These two discrete audio FM tracks remain on contemporary Laserdisks,
sandwiched between the 44.1kHz/16-bit digital info down in the baseband
(in the same frequency band as conventional CDs) and the approximately
15 MHz FM carrier for the NTSC or PAL-encoded video. As an aside, part of
the reason for the visual superiority of Laserdisks over VHS or Beta is that
the color-difference signals remain in NTSC or PAL form, compared to the
low-resolution color-under technique used in home videotape. This is only
possible due to the very wide bandwidth of Laserdisks.

Hmmm, along those lines, anyone for *uncompressed* 96kHz/20-bit discrete
5-channel audio using Laserdisks as a medium? Just think, the 12" album
could make a comeback!

--
Lynn Olson, Senior Editor, Positive Feedback Magazine,
a publication of the nonprofit Oregon Triode Society.
Call or fax (503) 234-4155 for more info.

GMGraves

Mar 14, 1995, 1:06:55 PM
Yeah, Pioneer "fiddled" with it in the late 'seventies. I heard it at an
Audio Engineering Society Convention in Los Angeles, California, in either
'78 or '79. It was OK. Getting a bit theoretical, I can probably put my
finger on why it never flew as a product.
1) In those days, everybody (especially the Japanese) thought that digital
was the wave of the future. People believed that digital sound would give
them "perfect sound forever" (oh, do we know better!).
2) The next year, Pioneer showed some 12" laser discs that were digital
audio only. The decoder hooked to the video output of the player and
provided left and right stereo audio outputs.
3) The only way (without reworking the video disc system) to get analog
audio from a laserdisc is to use the AFM method that has always been used
for the analog audio tracks on laser discs. This system is SIMILAR to Beta
Hi-Fi or VHS Hi-Fi, and uses CX companding noise reduction. I have a
couple of music video laserdiscs (classical) and have tried just listening
to the music with the picture turned off. It is nowhere near as good as a
well recorded LP, scratches and all! You can make the same determination.
If you don't have an LV player, copy a good sounding LP to your Hi-Fi VCR
and then listen to it played back: yecchhh! It might be fine for movie
soundtracks, but high-end audio it ain't!!

John DeGroof

Mar 14, 1995, 4:57:42 AM
In <3k1qf3$2...@geraldo.cc.utexas.edu> ly...@teleport.com (Lynn Olson)
writes:

>Hmmm, along those lines, anyone for *uncompressed* 96kHz/20-bit discrete
>5-channel audio using Laserdisks as a medium? Just think, the 12" album
>could make a comeback!

I have seen a certain military acoustic test. It involves a shiny
metallic looking sheet that's actually holographic. The sheet was about
4" wide and 24" long. The stripe of data was 1/4" wide and 20" long.
Three full length albums were recorded on that stripe, with a sampling
rate of 116KHz. The sheet of data was loaded into the computer in
seconds. I wish I could hear my system hooked up to this thing.

To get to the point, I would like for holographic technology to hurry up
and replace the CD so we could take advantage of the higher data
capacity, which would mean more minutes and higher sampling rate. A
laserdisc type format like you mentioned above would be a step forward in
one respect, and a step backward in another.

Don't you wish they'd just let us vote on new formats?

--
John G. DeGroof
jdeg...@ix.netcom.com
3084...@ucsbuxa.ucsb.edu

jj, curmudgeon and all-around grouch

Mar 14, 1995, 8:59:37 PM
In article <3k4miq$2...@netnews.upenn.edu> jdeg...@ix.netcom.com (John DeGroof) writes:
>To get to the point, I would like for holographic technology to hurry up
>and replace the CD so we could take advantage of the higher data
>capacity, which would mean more minutes and higher sampling rate.

Also to the point, just what sampling rates would you argue
were necessary, what number of bits of resolution in the PCM
domain, and what time length for the CD?

It's probably well within possibility, using some compression,
on a standard CD. Of course, you probably won't like the compression,
even if you can't hear it, but that's another problem.
--
Copyright alice!jj 1995, all rights reserved, except transmission by USENET and like facilities granted. Said permission is granted only for complete copies that include this notice. Use on pay-for-read services or non-electronic media specifically disallowed. -------
And God in his heaven has decided to keep mum, cause He's just another traveller on the road to kingdom come.

-----

John DeGroof

Mar 15, 1995, 4:11:27 AM
In <3k5i92$5...@agate.berkeley.edu> j...@research.att.com (jj, curmudgeon and
all-around grouch) writes:

>Also to the point, just what sampling rates would you argue
>were necessary, what b\number of bits of resolution in the PCM
>domain, and what time length for the CD?

Ok, here's my opinion... As time goes on, computers are getting faster and
faster. We are also coming up with formats of data storage that hold more
and more bytes. I would like to see a sampling rate as high as possible to
fill the data storage format used, and a computer that could crunch all that
data. The higher the sampling rate, the closer to the original analog
waveform. I suppose it will get to an undetectable degree, but for now 44.1
is nowhere near enough. I would like to see a 96KHz sampling rate standard
in consumer products.

As for bits of resolution, more bits means more amplitudes, therefore more
resolution. Again, I would like to see a standard of 64 bits in the consumer
products. The current 16 bit standard is obviously not enough. Again, this
would fill more data, and require a faster processor. There would also be a
point where more bits wouldn't be detectable. As for D/A converters, I like
what I've seen/heard with 1 bit technology (when used properly).

As for length for the CD, I think for most albums, the current limit is the
limit for most people to listen to one artist. It would be nice to have more
than the current 74 minute standard so the artist could make the decision of
how long his/her album should be. This, in my opinion, adds to the artistic
quality of music.

>It's probably well within possibility, using some compression,
>on a standard CD. Of course, you probably won't like the compression,
>even if you can't hear it, but that's another problem.

You're right, I don't like data compression. I don't have much experience
with digital audio data compression, and don't have much interest because of
my extensive background with video compression. No matter what the format,
audio or video, all compression routines lose data to SOME degree. I can see
the difference in targa 24 bit pictures, despite the claim there is no
picture loss. In the end, audio and video are both digital information, and
will suffer somewhat by compression routines.

John Kodis

Mar 16, 1995, 11:22:29 AM
John DeGroof <jdeg...@ix.netcom.com> wrote:

>As for bits of resolution, more bits means more amplitudes, therefore more
>resolution. Again, I would like to see a standard of 64 bits in the consumer
>products. The current 16 bit standard is obviously not enough.

Well, hey... I like massive overkill as much as the next guy, but a
quick calculation indicates that 64 bit samples would give an SNR of
around 385 dB. Isn't this perhaps a little excessive, even for the
high-end audio crowd?

-- John.

Gabe Wiener

Mar 16, 1995, 11:46:20 AM
In article <3k7hcb$q...@netnews.upenn.edu>,

John DeGroof <jdeg...@ix.netcom.com> wrote:
>The higher the sampling rate, the closer to the original analog
>waveform.

Where do you get this from? What sorts of bandwidths are you
presupposing are being delivered to the recorder? Or to your ears?
No analog waveform ever produced as a mainstream distribution format
has *ever* had a bandwidth as wide as the current CD format.

If you wish to argue about *acoustic* waveforms and how ultrasonics
might have some ameliorative effect (a very risky statement), that's
another argument entirely, and one which you should craft very
carefully before posting it. But to suggest that consumer audio
has ever seen analog signals with a bandwidth wider than current
CDs is ludicrous.

>I suppose it will get to an undetectable degree, but for now 44.1
>is nowhere near enough.

Enough for what? It is certainly enough to represent the spectrum of
human hearing, particularly that of most human beings over eighteen.
Dogs are, of course, a different story.

>I would like to see a 96KHz sampling rate standard
>in consumer products.

Gee, how'd you arrive at this number? Why 96? Why not 88 or 176?

>As for bits of resolution, more bits means more amplitudes,

No, more bits mean more dynamic range. You can have a great deal of
amplitude with only one bit. You just have pretty coarse step sizes.

>Again, I would like to see a standard of 64 bits in the consumer
>products.

You'd like *64* bits, eh? Well, let us do a little arithmetic, shall
we? 64 bits would give you a dynamic range of, oh, something on the
order of 384 dB. Hmmmm. Let us look at this another way. Let us
assume that we use 64 bits as headroom for a moment, meaning we get 48
bits above our current digital full scale. So, let us assume a
calibrated signal where current digital full scale is 16 dBv, a fairly
conservative figure. That would mean that your system would provide
for 288 dB of voltage beyond that. In other words, 304 dBv. This
would mean that your CD player would have to put out maximum voltages
in excess of 1.228 petavolts, or 1,228,000,000,000,000 volts at full
scale. Will you pick up the electric bill?

Or, if we actually utilize those extra bits for resolution, we can
forget about millivolt or microvolt steps. Don't even think about
nanovolts or picovolts. Your 64-bit system would have *femtovolt*
resolution. Just what we always wanted....CD players that can
quantize on the electron-volt scale.
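[Editorial note: the voltage arithmetic above is easy to reproduce. A level in dBv is 20·log10(V/1 V), so decibels convert to volts as 10^(dB/20); the 16 dBv full-scale figure is Wiener's assumption. A sketch:]

```python
def dbv_to_volts(dbv: float) -> float:
    """Convert a level in dBv (dB relative to 1 volt) to volts."""
    return 10 ** (dbv / 20)

# 48 bits of headroom at ~6.02 dB/bit above an assumed 16 dBv full scale:
peak = dbv_to_volts(16 + 48 * 6.02)  # on the order of 10**15 volts
```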

>The current 16 bit standard is obviously not enough.

Actually you are right. It is not enough. But why is it obvious?
Defend your position.

>There would also be a
>point where more bits wouldn't be detectable.

And, in most American homes, the current 16 bits doesn't even get utilized.

My advice to you, my good fellow, is to learn the terminology before you
throw around numbers that you demonstrably do not understand.

--
Gabe Wiener -- ga...@panix.com -- Director | "I am terrified at the thought that
Quintessential Sound, Inc. (212) 586-4200 | so much hideous and bad music may
Recording / Mastering / Restoration | be put on records forever."
PGM Early Music Recordings (800) 997-1750 | --Sir Arthur Sullivan

Steven Abrams

Mar 16, 1995, 11:25:00 PM
In article <3k7hcb$q...@netnews.upenn.edu> jdeg...@ix.netcom.com (John
DeGroof) writes:

> You're right, I don't like data compression. I don't have much experience
> with digital audio data compression, and don't have much interest because of
> my extensive background with video compression. No matter what the format,
> audio or video, all compression routines lose data to SOME degree. I can see
> the difference in targa 24 bit pictures, despite the claim there is no
> picture loss. In the end, audio and video are both digital information, and
> will suffer somewhat by compression routines.

All compression routines lose data to some degree? Gee. I hope my
software will work after I PKZIP, compress, gzip, zoo, or otherwise
compress it and then expand it. If even one bit is off, my binaries
won't run right!

I don't have the Targa spec off-hand, but I do know that there are
image formats that use purely lossless compression, e.g. GIF. That
is, the data after expansion is identical to the data before
expansion.

If you claim that you can *see* the difference between an image that
has undergone lossless compression and one which has not, you are
imagining things. Your frame buffer really doesn't care if the data
were compressed, or stored on punch-cards before being loaded.
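[Editorial note: the point is easy to verify with any DEFLATE-family compressor, the same family used by PKZIP and gzip. A minimal round-trip check in Python:]

```python
import zlib

payload = bytes(range(256)) * 64          # arbitrary binary data, 16 KB
packed = zlib.compress(payload, level=9)  # lossless, DEFLATE-based
restored = zlib.decompress(packed)

assert restored == payload  # bit-for-bit identical after the round trip
```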

~~~Steve
--
/*************************************************
*
*Steven Abrams abr...@cs.columbia.edu
*
**************************************************/
INFORMATION SUPERHIGHWAY = Interactive Network For Organizing,
Retrieving, Manipulating, Accessing, and Transferring Information On
National Systems, Unleashing Practically Every Rebellious Human
Intelligence, Gratifying Hackers, Wiseacres, And Yahoos. -- Kevin Kwaku.

Howard Christeller

Mar 17, 1995, 8:02:05 PM
In article <3k7hcb$q...@netnews.upenn.edu>,
jdeg...@ix.netcom.com (John DeGroof) wrote:
[deleted]
> ... The higher the sampling rate, the closer to the original analog
> waveform. I suppose it will get to an undetectable degree, but for now 44.1
> is nowhere near enough. I would like to see a 96KHz sampling rate standard
> in consumer products.
[deleted]
If the analog signal is band-limited to 20 kHz, 44.1 kHz sampling is
enough. Without proof that the transducers on each end of the audio
chain, much less our ears, can do more than 20 kHz, there is NO
need for higher sampling rates.

>
> As for bits of resolution, more bits means more amplitudes, therefore more
> resolution. Again, I would like to see a standard of 64 bits in the consumer
> products. The current 16 bit standard is obviously not enough. ...

I doubt that many people can achieve 96 dB S/N ratio acoustically in
a home environment. 16 bits is pretty good for a consumer format.
A few more bits might help, and are surely useful in the recording
studio, but 24 bits gives you 144 dB. 64 bits is absurd - 384 dB!

Stereophile reviewed the Audio Research VT-150 and called it
"...closer to the live musical experience than I've heard from any
amplifier." The manufacturer rates its noise at 98 dB below
maximum output, weighted. Get the idea yet?

I think that you are blaming the standards instead of the poor
implementations that we have all heard. Only recently have CD
players started to approach the limits imposed by the standards.

(Warning! Argument by assertion & speculation follows!)

In my opinion, it is the analog portions of today's digital audio
systems which are the weakest link. Why else the interest in
replacing opamps or improving power supply regulation? Moving more
bits will not improve the sound if we are limited by the noise and
distortion of the analog stages. How about fixing the part that IS
broken, instead of the part that works?

--
Howard Christeller how...@kaiwan.com
Irvine, CA

Bob Myers

Mar 18, 1995, 2:37:30 PM
John DeGroof (jdeg...@ix.netcom.com) wrote:
> Ok, here's my opinion... As time goes on, computers are getting faster and
> faster. We are also coming up with formats of data storage that hold more
> and more bytes. I would like to see a sampling rate as high as possible to
> fill the data storage format used, and a computer that could crunch all that
> data. The higher the sampling rate, the closer to the original analog
> waveform. I suppose it will get to an undetectable degree, but for now 44.1

The argument that a higher sampling rate is needed to get "closer to the
original analog waveform" doesn't hold up unless you believe that the
signal components above 22 kHz are important to the sound. Below that
frequency, a 44 kHz rate is adequate to recover the original waveform
PERFECTLY, at least to the resolution offered by the number of bits
per sample.

> As for bits of resolution, more bits means more amplitudes, therefore more
> resolution. Again, I would like to see a standard of 64 bits in the consumer
> products. The current 16 bit standard is obviously not enough. Again, this

16 bits provides the oft-quoted dynamic range of about 96 dB. Do you
believe that more is really required for music? Why, and specifically what
reasons would you give for a particular number of bits/sample? The notion
of 64 bits per sample indicates that you are not familiar with sampling
theory or the behavior and limitations of noise in band-limited systems.
64-bit sampling would be impossible to achieve at even a 44 kHz sampling
rate unless you're going to do some really remarkable things in the
design, let alone the higher rates you are apparently proposing. Here's
a clue: what do you expect to be the inherent level of thermal noise in
any piece of the playback chain, including the DAC, and how does that
level compare to the value of the LSB in a supposed 64 bit system?

> You're right, I don't like data compression. I don't have much experience
> with digital audio data compression, and don't have much interest because of
> my extensive background with video compression. No matter what the format,
> audio or video, all compression routines lose data to SOME degree. I can see

It is a vast overstatement to say that ALL compression routines are lossy.
There are quite a number of lossless compression schemes available for
video, audio, and all sorts of other data; the effectiveness of such
schemes (in terms of the compression ratio) depends on the redundancy
present in the original data. You don't HAVE to lose data - you DO
have to give up redundancy, which means that the resulting signal is
more sensitive to noise in the channel. But this just means that if
you have a channel which is sufficiently quiet that the original level
of redundancy isn't required, you CAN get significant compression via a
scheme which IS lossless. I'm surprised that a person claiming an
"extensive background with video compression" has never heard of simple
run-length encoding, which is about the simplest such scheme that comes
to mind.
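[Editorial note: the point that the achievable ratio depends on the redundancy of the input can be demonstrated directly: a lossless compressor shrinks highly redundant data dramatically but cannot shrink incompressible (random) data at all. A sketch using Python's zlib:]

```python
import os
import zlib

redundant = b"A" * 10_000         # maximal redundancy
random_data = os.urandom(10_000)  # essentially no redundancy

print(len(zlib.compress(redundant)))    # a few dozen bytes
print(len(zlib.compress(random_data)))  # ~10,000 bytes, or slightly more

# Both round-trip losslessly; only the ratio differs.
assert zlib.decompress(zlib.compress(random_data)) == random_data
```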

Bob Myers | my...@fc.hp.com
Senior Engineer, Displays & Human Interface | Note: The opinions presented
Workstation Systems Division | here are not those of my employer
Hewlett-Packard Co., Ft. Collins, CO | or of any rational person.

Richard D Pierce

Mar 18, 1995, 5:42:08 PM
>John DeGroof (jdeg...@ix.netcom.com) wrote:
>> As for bits of resolution, more bits means more amplitudes, therefore more
>> resolution. Again, I would like to see a standard of 64 bits in the
>> consumer products. The current 16 bit standard is obviously not enough.

No, John, more bits mean more dynamic range. For a given maximum output
level, it means more resolution relative to THAT level, simply because it
means a lower noise floor. For a given noise floor, it means a higher
maximum output level.

Your 64 bits figure exceeds, by a wide margin, the dynamic range of ANY
acoustic phenomenon known. If, for example, you set your 0 reference
level to be, say, 20 dB BELOW the threshold of hearing (where you will be
encoding the thermal collisions between air molecules with a good degree
of "noise-to-noise ratio"), that puts the loudest recordable signal at 365
dB SPL.

That figure would require your speakers to be able to produce an output
of 5.8 TRILLION ACOUSTIC WATTS! Now, even taking the OPTIMISTIC
estimate of 25% efficiency for Klipschorns, that requires an amplifier
capable of producing 23 TRILLION watts per channel. Assuming class AB
operation at 40% efficiency, that requires the amplifier to draw nearly
120 TRILLION watts of electricity from your wall socket.

This little exercise is what is referred to as a "reductio ad absurdum":
take the premise to its logical conclusion and examine the
results. Are the results reasonable or absurd? It's a purely objective
process, no personal slam intended.

But the implications of your premise don't quite meet the criteria of
"reasonable," thus, they must be, ...

Bob Myers <my...@hpfcla.fc.hp.com> wrote:
>16 bits provides the oft-quoted dynamic range of about 96 dB. Do you
>believe that more is really required for music? Why, and specifically what
>reasons would you give for a particular number of bits/sample. The notion
>of 64 bits per sample indicates that you are not familiar with sampling
>theory or the behavior and limitations of noise in band-limited systems.
>64-bit sampling would be impossible to achieve at even a 44 kHz sampling
>rate unless you're going to do some really remarkable things in the
>design, let alone the higher rates you are apparently proposing.

The oft-quoted "ideal" sampling rate of 96 kHz combined with your desire
for 64 bits per sample combined with the need for stereo combined with the
link protocol overhead (we'll assume 64 bits per sample period for better
error correction, for a total of 192 bits per sample period) combined with
standard bi-phase encoding gives us a combined communications channel bit
rate of about 40 MHz. If you buy the jitter premise of many, how are they
going to react when you've added an order of magnitude to their problem?
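[Editorial note: Pierce's channel-rate arithmetic, spelled out; the 64-bit-per-period overhead is his assumption:]

```python
fs = 96_000              # samples per second
payload_bits = 64 * 2    # two channels at 64 bits per sample
overhead_bits = 64       # assumed framing / error-correction per sample period
biphase_factor = 2       # bi-phase encoding: up to two transitions per bit

channel_rate = fs * (payload_bits + overhead_bits) * biphase_factor
print(channel_rate)  # 36864000 -- i.e. roughly 40 MHz, as stated
```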

>Here's
>a clue: what do you expect to be the inherent level of thermal noise in
>any piece of the playback chain, including the DAC, and how does that
>level compare to the value of the LSB in a supposed 64 bit system?

More to the point, assuming an ABSOLUTE level limit of 140 dB, above the
threshold of prompt hearing damage, you're talking about attempting to
record acoustic phenomena at levels 245 dB BELOW the threshold of
hearing. The random thermal motion of air molecules provides an absolute
noise floor below which anything is completely ambiguated and lost. It is
THAT which provides the PHYSICAL limit to resolution, not some hokum about
bits. Now, add the noise in a quiet listening environment (20 dBA SPL), and
you've reduced your REAL ENCODABLE dynamic range to 120 dB. That's 20
bits' worth. Now, let's assume a REAL limit to output level of 120 dB SPL,
leaving a dynamic range of 100 dB. That's encodable by 17 bits. The mere
presence of NOISE makes any further resolution impossible. Plain and simple.
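[Editorial note: the bit counts above follow from the roughly 6.02 dB of range contributed by each bit. A sketch:]

```python
import math

def bits_for_range(db: float) -> int:
    """Smallest bit depth covering a dynamic range of `db` decibels at ~6.02 dB/bit."""
    return math.ceil(db / 6.02)

print(bits_for_range(120))  # 20 -- a 120 dB range needs 20 bits
print(bits_for_range(100))  # 17 -- a 100 dB range needs 17 bits
```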

>
>> You're right, I don't like data compression. I don't have much experience
>> with digital audio data compression, and don't have much interest because of
>> my extensive background with video compression. No matter what the format,
>> audio or video, all compression routines lose data to SOME degree.

ALL compression routines lose data? ALL of them, John? NO MATTER WHAT
FORMAT?

Care to back that up with hard evidence? For example, you are asserting
that if I take an audio file (the actual sample length and sample rate is
completely up to you) and compress it with PKZIP, (maybe getting 20%
compression rates), then uncompress it with PKUNZIP, then do a bit-by-bit
comparison of the two files, that I MUST find AT LEAST ONE BIT different?

Care to cover a bet that you're dead wrong with real money?

Now, to the converse, name me a single analog storage format THAT DOES
NOT COMPRESS AT ALL, that's completely lossless in generational copies.
Be prepared to provide evidence supporting the assertion, if you care to
make it (which you may not, that's okay).

--
| Dick Pierce |
| Loudspeaker and Software Consulting |
| 17 Sartelle Street Pepperell, MA 01463 |
| (508) 433-9183 (Voice and FAX) |

Elton Toma

Mar 19, 1995, 9:48:40 AM
On 18 Mar 1995, Bob Myers wrote:

> John DeGroof (jdeg...@ix.netcom.com) wrote:
.......JGs quote removed...........

> The argument that a higher sampling rate is needed to get "closer to the
> original analog waveform" doesn't hold up unless you believe that the
> signal components above 22 kHz are important to the sound. Below that
> frequency, a 44 kHz rate is adequate to recover the original waveform
> PERFECTLY, at least to the resolution offered by the number of bits
> per sample.

Agreed, for playback at least....

There is one benefit to sampling higher than 44 kHz. You have
a higher Nyquist frequency, so you don't need filters which cut off as
sharply as those required to record with a 22 kHz Nyquist frequency. Or,
with the same filters, you can move the cut-off point up.
Yes, 22 kHz is high enough, but a greater margin for error (say a 30 kHz
Nyquist frequency) would make aliasing easier to deal with. I realize that
the acoustic signal being recorded may not contain much information above
20 kHz, but any high frequency noise introduced into the system *after*
the anti-aliasing filter will alias down into the audio range, and a
higher sampling rate should make this less likely to happen, although
noise above the Nyquist frequency will always be aliased down if it is
not filtered out (including RF noise). The highest possible rate should be
used for recording, but that doesn't make it necessary for playback.
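[Editorial note: the folding argument can be made concrete. Any real component above the Nyquist frequency reappears in the baseband at a predictable frequency:]

```python
def alias_frequency(f_hz: float, fs_hz: float) -> float:
    """Apparent frequency of a real tone at f_hz after sampling at fs_hz."""
    f = f_hz % fs_hz
    return f if f <= fs_hz / 2 else fs_hz - f

print(alias_frequency(30_000, 44_100))  # 14100.0 -- a 30 kHz tone folds to 14.1 kHz
print(alias_frequency(10_000, 44_100))  # 10000.0 -- in-band content is untouched
```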

Elton Toma

John DeGroof

Mar 19, 1995, 7:52:36 PM
In <3kb5oc$p...@netnews.upenn.edu> abr...@cs.columbia.edu (Steven Abrams)
writes:

>All compression routines lose data to some degree? Gee. I hope my
>software will work after I PKZIP, compress, gzip, zoo, or otherwise
>compress it and then expand it. If even one bit is off, my binaries
>won't run right!

It will work the same in that respect, but the file is NOT the same.
If you doubt this, look at the file with a text editor before you
compress it, then compress and uncompress using the various
compression methods listed above and you will find they are different.
Some files zipped and unzipped are larger than the original!

>I don't have the Targa spec off-hand, but I do know that there are
>image formats that have purely lossless compression, i.e. GIFs. That
>is, the data after expansion is identical to the data before
>expansion.

There may be, but I haven't seen one yet, and I've looked.

>If you claim that you can *see* the difference between an image that
>has undergone lossless compression and one which has not, you are
>imagining things. Your frame buffer really doesn't care if the data
>were compressed, or stored on punch-cards before being loaded.

Take any high resolution graphics file, use something like JPEG, then
view the file again only this time zoom in (320x200) and look at the
pixels. You will see the difference in the before vs the after.

James W. Durkin

Mar 19, 1995, 8:54:00 PM
In article <3kijkk$7...@eyrie.graphics.cornell.edu> jdeg...@ix.netcom.com (John DeGroof) writes:

> In <3kb5oc$p...@netnews.upenn.edu> abr...@cs.columbia.edu (Steven Abrams)
> writes:
>
> >All compression routines lose data to some degree? Gee. I hope my
> >software will work after I PKZIP, compress, gzip, zoo, or otherwise
> >compress it and then expand it. If even one bit is off, my binaries
> >won't run right!
>
> It will work the same in that respect, but the file is NOT the same.
> IF you doubt this, look at the file with a text editor before you
> compress it, then compress and uncompress using the various
> compression methods listed above and you will find they are different.
> Some files zipped and unzipped are larger than the original!

There are 'lossless' and 'lossy' compression schemes. In 'lossless'
schemes, the data going into the compression routine is EXACTLY the
same as the data that comes out of the decompression routine. If it
is not, then either the implementer of the algorithm made a mistake
or the data was corrupted while in its intermediate, compressed, form.
'compress' and 'gzip' are both LOSSLESS compression, implementing the
LZW and LZ compression algorithms, respectively. Running data through
these algorithms DOES NOT RESULT IN DATA LOSS. Your claims to the
contrary are WRONG.

> >I don't have the Targa spec off-hand, but I do know that there are
> >image formats that have purely lossless compression, i.e. GIFs. That
> >is, the data after expansion is identical to the data before
> >expansion.
>
> There may be, but I haven't seen one yet, and I've looked.

I think you need to look a little deeper. One of the oldest image data
compression techniques that I am aware of is 'run length encoding'.
This is a 100% LOSSLESS technique. If you're looking for a concrete
implementation, please examine the Utah Raster Toolkit's RLE image
format. Again, I think you need to do a little more homework before
making such demonstrably false claims.
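[Editorial note: run-length encoding, as named above, is trivially lossless. A minimal sketch, with run counts capped at 255 so each run fits in one byte:]

```python
def rle_encode(data: bytes) -> bytes:
    """Encode as (count, byte) pairs; runs longer than 255 are split."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(packed: bytes) -> bytes:
    """Expand (count, byte) pairs back to the original data."""
    out = bytearray()
    for i in range(0, len(packed), 2):
        out += bytes([packed[i + 1]]) * packed[i]
    return bytes(out)

sample = b"aaaaabbbcccccccc"
assert rle_decode(rle_encode(sample)) == sample  # lossless round trip
```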

> >If you claim that you can *see* the difference between an image that
> >has undergone lossless compression and one which has not, you are
> >imagining things. Your frame buffer really doesn't care if the data
> >were compressed, or stored on punch-cards before being loaded.
>
> Take any high resolution graphics file, use something like JPEG, then
> view the file again only this time zoom in (320x200) and look at the
> pixels. You will see the difference in the before vs the after.

Sigh! JPEG is a LOSSY compression scheme. If you compress data with a
lossy scheme, then there will be data loss. The detectability of that
data loss varies from scheme to scheme. High-quality JPEG images (i.e.,
those with the 'quality factor' turned way up) can look very, very
close to the original image. That, however, isn't the point. When
you're dealing with LOSSLESS image compression techniques, there is no
difference (in case you missed that, NO DIFFERENCE) whatsoever between
the original image and the final image after decompression. Remember,
your original claim amounted to the statement that ALL data compression
schemes are lossy. That is, quite clearly, wrong.

Please, Mr. DeGroof, before you go passing off such absolutely false
information, GET YOUR FACTS STRAIGHT. It is one thing to suggest ideas
and explanations, which upon closer examination prove to be untrue. It
is entirely another matter to repeatedly present information as the
absolute truth, when that information isn't even close to being accurate.
Coming up with bad ideas is one thing, selling snake oil is something else
entirely. Please don't sell snake oil.

[[ James W. Durkin -- j...@graphics.cornell.edu ]]
[[ Program of Computer Graphics -- Cornell University ]]

John DeGroof

Mar 19, 1995, 9:23:03 PM
In <3kg20i$6...@netnews.upenn.edu> DPi...@world.std.com (Richard D
Pierce) writes:

So what you're saying is that the space between the steps of resolution
with 16 bits is the same as the space between the steps of resolution
with 64 bits, only because there are more bits, there are more steps?
(which also gives us the huge dynamic range). I'm using the steps
analogy because it's the clearest one I could come up with for now.

Let's say there was a machine capable of 64-bit samples. If I'm
recording close to 0 level (highest possible in digital), I'm using
almost all of the 64 bits, and therefore have more resolution, and a
greater number of volume steps within a specified dynamic range, or do
you disagree?

Also, 64 bits of resolution does give a rather absurdly large amount of
dynamic range. If the statement in the first paragraph is true, how
much dynamic range do you feel we need, and how many bits would it take
to achieve this? We're testing a new digital multitrack recorder (that
hasn't been released yet), and the trend seems to be towards more bits.
The machine I'm referring to is a Studer 48 track digital, which can
use 24 bit samples. If you're saying 17 bits is enough, why are the
new machines 24 bits? How much dynamic range does this equate to?

There have been two responses on the need (or lack of) for more bits in
digital audio, but no comments on the need for a higher sampling rate,
except from those who have never seen an actual digital waveform and so
don't know that they don't look like pure sine waves. Does this mean
everyone agrees we need a higher sampling rate? I sure hope so.

>ALL compression routines lose data? ALL of them, John? NO MATTER WHAT
>FORMAT? Care to back that up with hard evidence? For example, you are
>asserting that if I take an audio file (the actual sample length and
>sample rate is completely up to you) and compress it with PKZIP,

I originally said I don't have much experience with audio data
compression methods and don't trust them because of my experience with
video compression, therefore I wouldn't have any "hard evidence" to
show you. It's a matter of faith, or lack thereof in my case.

GMGraves

unread,
Mar 19, 1995, 9:24:50 PM3/19/95
to
Let's see: 64 bits, 96 kHz sounds nice. Unfortunately it is not practical
or necessary.
More bits are not necessarily what is needed here, although 24 would
improve things at the bottom of the loudness scale IF we stay with linear
quantization. But we need to change from linear quantization to
logarithmic. When CD was set as a standard, digital technology was fairly
primitive compared to what it is now. A logarithmic DAC was just not
practical, so the "authors" of CD (Philips/Sony) decided to go with linear
quantization. The problem with this is that sound is a logarithmic function
and they are trying to quantify it linearly. No wonder it sounds
dreadful. For example, the only time that all 16 bits are working in
today's system is when the sound level is at maximum permissible
recording level. If you drop the recording level by 3dB, you now are
utilizing only 8 bits, which can quantify only 256 different amplitude
levels. Drop another 3dB, and you now have a 4-bit system which can
quantify only 16 voltage levels, and this is for music that is only
one-fourth full volume! 4 bits also gives an S/N ratio of only 24 dB, far
worse than any but a damaged LP. The fact is, the lower the record level
with digital, the higher the distortion and the poorer the S/N ratio! A
logarithmic encoding system would give you 12-bit resolution at -3 dB level
and 9 bits at -6 dB (assuming we stuck with 16 bits overall).
With our present system, it's no wonder that "golden ears" think that CDs
sound dreadful, and they are right. With the poor resolution of linear
quantization, noise and distortion cannot help but obscure subtle musical
detail such as hall ambience etc.

George graves

Mark Brindle

unread,
Mar 19, 1995, 10:50:05 PM3/19/95
to
Gabe Wiener (ga...@panix.com) wrote:

: Your 64-bit system would have *femtovolt* resolution.

Obviously, the correct way to set up a 64-bit A/D is to use the
lower 185 dB of dynamic range to record everything down through
atomic collisions. Save the upper 200 dB for extra headroom...

...just in case there's another Big Bang,

Mark

Adour Vahe Kabakian

unread,
Mar 20, 1995, 1:50:25 AM3/20/95
to
In article <3kip1i$7...@eyrie.graphics.cornell.edu>,
GMGraves <gmgr...@aol.com> wrote:

> No wonder it sounds dreadful. For example, the only time that all 16
> bits are working in today's system, is when the sound level is at
> maximum permissible recording level. If you drop the recording level
> by 3dB, you now are utilizing only 8 bits which can quantify only
> 256 different amplitude levels. Drop another 3dB, and you now have a
> 4 bit system which can quantify only 16 voltage levels,

Let's take a closer look at what you are saying:

Level (dB) = 20*log10(# of levels)

20*log(2^16) = 96.3 dB
20*log(2^15) = 90.3 dB

We dropped by 6dB, yet we still have 2^15 = 32768 levels
available.

It seems you have used the rule of thumb of -3 dB for each
halving of intensity (or -6 dB for each halving of amplitude).

Your mistake is very clear. You assumed that to get half the
level corresponding to 16 bits you need 8 bits! But,

2^16 / 2 = 2^15 -> 15 bits

We all have made our share of mistakes in trivial algebra.
However, the implications of your analysis mentioned in the rest of
your post are so ludicrous that they should have strongly hinted to you
that there was something seriously wrong with your numbers.

Come on, it is obvious that we don't hear only a dozen or so
amplitude levels from CDs or that the S/N ratio is around 24 dB!
When your conclusions are in clear contradiction with the obvious,
you should step back and ponder on what you have done wrong.
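The arithmetic is quick to confirm (a sketch):

```python
import math

def dynamic_range_db(levels):
    # Full-scale dynamic range of a linear quantizer with this many levels.
    return 20 * math.log10(levels)

full   = dynamic_range_db(2 ** 16)   # ~96.3 dB
halved = dynamic_range_db(2 ** 15)   # ~90.3 dB

# Halving the amplitude costs exactly one bit, i.e. ~6 dB -- not eight bits.
assert round(full - halved, 1) == 6.0
```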

-adour

Richard D Pierce

unread,
Mar 20, 1995, 7:43:36 AM3/20/95
to
In article <3kjacp$6...@netnews.upenn.edu>,

But, Elton, this is the whole point of the process of oversampling. No
disagreement whatsoever that attempting to design and construct
anti-aliasing and anti-imaging filters (especially the latter, implemented
at 22 kHz in analog in first-generation CD players) is a dicey task at
best, and often leads to rather bizarre effects.

That's why NOBODY does it any more. The whole point of oversampling is
to move the aliasing and imaging problem FAR above 22 kHz, where, now,
it's a far easier task to deal with.

For example, the A/D converter on the pro workstation I work on runs at
64x oversampling: that means that it's sampling at 2 MHz or 2.8 MHz. The
entire anti-aliasing can be done in the digital domain with NO phase
consequences in the passband. Then, the data is simply decimated to
derive the actual sample rate of 32 or 44.1 kHz, as the user deems fit.

On playback, the D/A converters run at 16x oversampling: that's 512 kHz
or 706 kHz. The actual analog portion of the anti-imaging filter is the
simplest imaginable, linear phase, low order, while the actual imaging
filtering above 22 kHz again is done in the digital domain with none of
the awful phase consequences of analog filters. Further, the consequences
of sampling quantization are no longer spread out over the range of 0 to
16 kHz or 0 to 22 kHz; the SAME noise is now distributed over the range
of 0 to 256 kHz or 0 to 353 kHz. Even if we assumed a random
distribution, that means that the quantization noise IN THE PASSBAND is
now 1/16th what it would have been in the non-oversampled case.

Yes, the problems you raise are real. And they were solved quite some
time ago.
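The 1/16th figure Pierce quotes is just the bandwidth ratio, under the idealized assumption that the quantization noise is white. A sketch:

```python
import math

def inband_noise_reduction_db(oversampling_ratio):
    # White quantization noise of fixed total power is spread uniformly
    # from 0 to fs/2.  Oversampling by N leaves only 1/N of that power
    # inside the original passband: a 10*log10(N) dB improvement.
    return 10 * math.log10(oversampling_ratio)

# 16x playback oversampling: the in-band noise power drops to 1/16th,
# i.e. about 12 dB, before any noise shaping is even applied.
print(inband_noise_reduction_db(16))
```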

Richard D Pierce

unread,
Mar 20, 1995, 9:38:04 AM3/20/95
to
In article <3kijkk$7...@eyrie.graphics.cornell.edu>,

John DeGroof <jdeg...@ix.netcom.com> wrote:
>In <3kb5oc$p...@netnews.upenn.edu> abr...@cs.columbia.edu (Steven Abrams)
>writes:
>
>>All compression routines lose data to some degree? Gee. I hope my
>>software will work after I PKZIP, compress, gzip, zoo, or otherwise
>>compress it and then expand it. If even one bit is off, my binaries
>>won't run right!
>
>It will work the same in that respect, but the file is NOT the same.
>IF you doubt this, look at the file with a text editor before you
>compress it, then compress and uncompress using the various
>compression methods listed above and you will find they are different.
>Some files zipped and unzipped are larger than the original!

Sorry, John, you absolutely have no clue whatsoever what you are talking
about. Present your proof of this to the appropriate newsgroup, one of the
comp.sys groups, those that deal with lossless compression, because there
are literally millions of instances that will prove you dead wrong.

I can take a pile of files, compress them using pkzip, then uncompress
them and do a bit-for-bit comparison, and will find not a single bit
different.

If, on the other hand, you are talking about taking ANY file from one
system and moving it to another whose disk allocation quantum is
different, then you're talking apples and screwdrivers here, which,
again, demonstrates that you don't know what you are talking about.

I have NEVER seen a single instance of the behavior that you are talking
about. When I compress and decompress a file and compare it to its
original version, even asking the question "how many bytes are in this
file?", the answer is: exactly the same as the original.

You've made a serious charge against numerous organisations and
individuals in the above assertion. I suggest you either provide hard
evidence supporting your assertion, or retract it in a hurry.

>>I don't have the Targa spec off-hand, but I do know that there are
>>image formats that have purely lossless compression, i.e. GIFs. That
>>is, the data after expansion is identical to the data before
>>expansion.
>
>There may be, but I haven't seen one yet, and I've looked.

Not very hard, you haven't.

>>If you claim that you can *see* the difference between an image that
>>has undergone lossless compression and one which has not, you are
>>imagining things. Your frame buffer really doesn't care if the data
>>were compressed, or stored on punch-cards before being loaded.
>
>Take any high resolution graphics file, use something like JPEG, then
>view the file again only this time zoom in (320x200) and look at the
>pixels. You will see the difference in the before vs the after.

But most JPEG compressors DO NOT CLAIM TO BE LOSSLESS. They do not make
that assertion.

You, on the other hand, have made the utterly unsupportable and
demonstrably false grand, sweeping assertion that "all" compression
schemes lose data. In your assertion above, in answer to the poster's
question about using PKZIP, you assert that even text files WILL be
different, an assertion that is provably wrong for every single text file
and every single binary file that you put through it.

If you are saying that it is POSSIBLE to detect differences in LOSSY
compression schemes, fine. That's demonstrably the case. Simply take the
original, flip its polarity, sum it with the compressed-then-uncompressed
version, and the result will not null to zero.

However, your assertion that ALL compression schemes are lossy is
unsupportable.

John, sorry, but you have now landed yourself in a position which is
technically indefensible.
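The polarity-flip null test described above is easy to sketch. In the sketch below, zlib stands in for a lossless codec and crude requantization stands in for a lossy one (both stand-ins are illustrative assumptions, not anything from the thread):

```python
import zlib

original = list(range(-50, 50))  # stand-in sample values

# Lossless round trip: the restored data nulls perfectly against the original.
blob = bytes(x + 128 for x in original)           # offset into valid byte range
restored = [b - 128 for b in zlib.decompress(zlib.compress(blob))]
null = [a - b for a, b in zip(original, restored)]
assert all(v == 0 for v in null)                  # sums to zero everywhere

# "Lossy" round trip (round to the nearest multiple of 4): a non-zero residual.
lossy = [4 * round(x / 4) for x in original]
residual = [a - b for a, b in zip(original, lossy)]
assert any(v != 0 for v in residual)              # cannot null to zero
```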

Terry Rosen

unread,
Mar 20, 1995, 12:50:26 PM3/20/95
to
In article <3k7hcb$q...@netnews.upenn.edu> jdeg...@ix.netcom.com (John DeGroof) writes:
>From: jdeg...@ix.netcom.com (John DeGroof)
>Subject: Re: Was there ever an analog compact disc?
>Date: 15 Mar 1995 09:11:27 GMT

>In <3k5i92$5...@agate.berkeley.edu> j...@research.att.com (jj, curmudgeon and
>all-around grouch) writes:

(edited for effect)
> As time goes on, computers are getting faster... more and more bytes...
>sampling ...fill the data storage format used....

All of this is very interesting, but aren't you forgetting something? It's
still processing of some sort that's getting between the music and the
listener. So what if it's a billion bits and a sampling rate in the middle
of channel 5--can you do it all with a handful of transistors? That's the
digital dilemma.

John DeGroof

unread,
Mar 20, 1995, 12:54:27 PM3/20/95
to
In <3kdbed$n...@tolstoy.lerc.nasa.gov> how...@kaiwan.com (Howard
Christeller) writes:

>If the analog signal is band-limited to 20 KHz, 44.1 KHz sampling is
>enough. Without proof that the transducers on each end of the audio
>chain, much less our ears, can do more than 20 KHz, then there is NO
>need for higher sampling rates.

Oh come on, do you listen to sine waves or music? Think about it!
We're not trying to achieve greater-than-20 kHz signals, just more
samples to get the harmonics. A side effect would be that we can
sample frequencies higher than 20 kHz, but this is ONLY a side effect!



>I doubt that many people can achieve 96 dB S/N ratio acoustically in
>a home environment. 16 bits is pretty good for a consumer format.
>A few more bits might help, and are surely useful in the recording
>studio, but 24 bits gives you 144 dB. 64 bits is absurd - 384 dB!

I greatly disagree. If you are the type of person that listens to
music in the car, ok, but if you are anywhere near a serious
audiophile, you would agree with me.

Again, you're looking at another side effect. We're not trying to get
a higher SPL, just more resolution. Imagine your volume control
notched with 4 notches. You may not like any of these settings, as one
is surely too loud, and another is too quiet. If you had more notches,
you could find the perfect listening level. This is what we are
trying to achieve with more bits. The ultimate would be what you have
now, an analog control, with an infinite number of notches.

>Stereophile reviewed the Audio Research VT-150 and called it
>"...closer to the live musical experience than I've heard from any
>amplifier." The manufacturer rates its noise at 98 dB below
>maximum output, weighted. Get the idea yet?

S/N ratio is something entirely different. Our recording console's S/N
ratio is around 127 dB. I would do more research on what S/N and
noise floors really are, since you are showing your lack of knowledge.

>I think that you are blaming the standards instead of the poor
>implementations that we have all heard. Only recently have CD
>players started to approach the limits imposed by the standards.

They've always had these limits. The original design was flawed. The
CD was released in a hurry to save the music industry from disco.
Disco was something you went out and did, not bought. The CD was
released to increase spending in music, and to try to kill disco. They
have admitted that it was released about 5 years too early. If they
had waited, the CD standards would have been much higher.

BTW, do explain what you mean by that last sentence about CD players
just now reaching the limits. Nothing has changed.

>In my opinion, it is the analog portions of today's digital audio
>systems which are the weakest link. Why else the interest in
>replacing opamps or improving power supply regulation? Moving more
>bits will not improve the sound if we are limited by the noise and
>distortion of the analog stages. How about fixing the part that IS
>broken, instead of the part that works?

Not at all. I've never heard anything better than full analog. To
be specific, I used 24-track analog tape with Dolby SR. If you could
hear the best of analog and the best of digital with the pro equipment
we have in the studio, you would have no question in your mind either.

I'm not saying analog is perfect, and we are always coming up with
better opamp and power supply designs, but it is far from the weakest
link. Speakers would be the weakest link.

Again, "moving more bits" has nothing to do with the analog noise, and
if you read my response above, it does improve the sound. The part
that is broken is the digital part. You have almost nailed it
literally, since the digital waveform is a "broken" waveform. That's
the problem that could be fixed with a higher sampling rate.

Andre Yew

unread,
Mar 20, 1995, 12:58:59 PM3/20/95
to
In <3kijkk$7...@eyrie.graphics.cornell.edu> jdeg...@ix.netcom.com (John DeGroof) writes:

>It will work the same in that respect, but the file is NOT the same.
>IF you doubt this, look at the file with a text editor before you
>compress it, then compress and uncompress using the various
>compression methods listed above and you will find they are different.
>Some files zipped and unzipped are larger than the original!

Wow, I noticed this too! I find that the associated data-manipulation
algorithms have a lot to do with it, but there is no one perfect algorithm --- they
all have tradeoffs. For example, if I use an O(n^3) string sorting algorithm
for the LZW decompression, the data is harsher (it read like Dick Pierce on
a bad day!) and grainier. This is because the complexity order is odd and
reverberates bad odd harmonics throughout the data. I then tried an O(n^2)
algorithm and the data tightened up (all the loose spaces turned to taut tabs)
and it was more pleasing. In fact, I find that keeping everything even, like
line-numbers, number of 's's (only they matter, go figure!), and other important
parameters, resulted in an astonishing transparency and palpability of the
data, but still not so much so that you can all chuck your analog computers.
Now, the algorithm IS expensive --- it takes 10 hours to run on an Alpha multi-
processor --- but it would still be a phenomenal bargain at twice the runtime!

Next, I'm going to replace the EPROMS of my computer with faster
parts that have teflon insulating layers, silver metal layers, and vacuum tube
output buffers (for truth of timbre!) on the data lines. With this increase in
speed, my 25 MHz 68030 is just as fast or even subjectively faster than a 50
MHz 68030 --- it goes over twice as much data at half the required speed!
Think of it as two processors trying to race over hills --- the 50 MHz one has
to carry a bigger clock crystal and has to go over TWICE the number of hills,
so it goes slower. Get it?

Now don't any of you textbook engineers tell me what to think --- you
just compute with your pencils and notepads while I do it by actually typing
and feeling, not thinking about, the data. Accept no substitutes! Besides,
I've never seen any of those blue pencil-protector information theory people
agree on anything! They can't even decide if a program will halt!

--Andre

P.S. 11 shopping days left til April 1, and I'm done with my shopping already.

Richard D Pierce

unread,
Mar 20, 1995, 1:02:27 PM3/20/95
to
In article <3kiou7$7...@eyrie.graphics.cornell.edu>,

John DeGroof <jdeg...@ix.netcom.com> wrote:
>So what you're saying is that the space between the steps of resolution
>with 16 bits is the same as the space between the steps of resolution
>with 64 bits, only because there are more bits, there are more steps?
>(which also gives us the huge dynamic range). I'm using the steps
>analogy because it's the clearest one I could come up with for now.

Each higher order "bit" contributes a factor of two to the total number
of unambiguous states to the total system. And that translates to
extended dynamic range. Period.

>Lets say there was a machine capable of 64 bit samples. If I'm
>recording close to 0 level (highest possible in digital), I'm using
>almost all of the 64 bits, and therefore have more resolution, and a
>greater number of volume steps within a specified dynamic range, or do
>you disagree?

Fine, I agree, but let's look at the consequences of that. That means
that the smallest unambiguous change is 1 bit, or 1 in 2^64, and
that's 386 dB below 0 level. Let's set up our gain structure such that
digital saturation (your 0 level) corresponds to a sound pressure level
of 140 dB SPL. That means that the system (if it were realizable) is
recording changes in sound pressure level that are 246 dB BELOW the
smallest change detectable by the ear under the BEST circumstances, and
about the same below the random thermal noise of air molecules. You
have, under the very best of circumstances, utterly wasted 41 bits out of
your 64 to simply record the noise of air molecules banging into each
other. You have wasted 41 bits to attempt to produce sounds that are 246
dB below the sound of the blood flowing through your ears. To what end?

The resolution of your ears is set by the lowest level they can detect,
and they are as much noise-limited as sensitivity-limited.

>Also, 64 bits of resolution does give a rather absurdly large amount of
>dynamic range. If the statement in the first paragraph is true, how
>much dynamic range do you feel we need, and how many bits would it take
>to achieve this? We're testing a new digital multitrack recorder (that
>hasn't been released yet), and the trend seems to be towards more bits.
>The machine I'm referring to is a Studer 48 track digital, which can
>use 24 bit samples. If you're saying 17 bits is enough, why are the
>new machines 24 bits? How much dynamic range does this equate to?

The dynamic range is simply 20 log10(2^n) where n is the number of bits.
Or roughly, 6 dB per bit.
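Plugging a few word lengths into that formula (a quick sketch):

```python
import math

# Dynamic range of an n-bit linear quantizer: 20*log10(2^n), ~6.02 dB/bit.
for bits in (16, 20, 24, 64):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits:2d} bits -> {dr:6.1f} dB")

# 16 bits ->   96.3 dB
# 20 bits ->  120.4 dB
# 24 bits ->  144.5 dB
# 64 bits ->  385.3 dB   (the "absurd" figure quoted elsewhere in the thread)
```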

Further, I am the lead programmer on a professional digital audio editing
workstation. I can rightly claim that internally it uses 32 bits. This is
NECESSARY for gain, mixing and such calculations to prevent overflow and
truncation. But it's a 16 bit machine.

How much do we need? Well, again, assuming an ABSOLUTE level of 120 dB
SPL and ASSUMING (incorrectly) that the listening environment has a noise
level below the threshold of hearing, that means we need a dynamic range
of 120 dB, which is 20 bits, max. On the other hand, assuming that the
listening room has a noise floor of 20 dB, that the recording venue has a
similar noise floor, that we limit ourselves to more realistic listening
levels (head banging, ear-bleeding rock excepted for the moment), then we
find that dynamic ranges on the order of 90 dB are adequate, and that's
encodable unambiguously by 16 bits. So, the answer lies somewhere between
16 and 20 bits for all but the most pathological of recording and
listening venues.

>There have been two responses on the need (or lack of) for more bits in
>digital audio, but no comments on the need for a higher sampling rate,
>except by those that have never seen an actual digital waveform to know
>that they don't look like pure sine waves.

Huh? I work on this stuff all day, John, and the sine waves I put in damn
sure look like sine waves coming out, to the point where the entire
system is adding NO MORE THAN 0.005% THD at ANY frequency. Where you got
your information from, I have no idea, but I sure hope you didn't pay
anything for it.

> Does this mean everyone
>agrees we need a higher sampling rate? I sure hope so.

No, because your premise is wrong.

>>ALL compression routines lose data? ALL of them, John? NO MATTER WHAT
>>FORMAT? Care to back that up with hard evidence? For example, you are
>>asserting that if I take an audio file (the actual sample length and
>>sample rate is completely up to you) and compress it with PKZIP,
>
>I originally said I don't have much experience with audio data
>compression methods and don't trust them because of my experience with
>video compression, therefore I wouldn't have any "hard evidence" to
>show you. It's a matter of faith, or lack thereof in my case.

No, John, you stated ALL compression schemes have losses, and further, in
another article in this thread, asserted that even pkzip will change a
file. Now you are claiming "little experience." Which is it? If I had the
choice, well, I would vote for, uhm, guess what?

Adour Vahe Kabakian

unread,
Mar 20, 1995, 1:03:21 PM3/20/95
to
In article <3kip1i$7...@eyrie.graphics.cornell.edu>,
GMGraves <gmgr...@aol.com> wrote:

>..But we need to change from linear quantization to logarithmic.

I addressed your mistakes in the rest of your post (24 dB S/N,
etc). But you bring out a very interesting point. Is there any
research documenting the effects of the spacing of discrete amplitude
levels?

In other words, what kinds of functions f have been used and
studied, such that Amplitude = f (# bits)?
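One well-studied answer is mu-law companding, standardized for 8-bit telephony in ITU-T G.711. A sketch of the continuous mu-law curve (note: real G.711 uses a segmented approximation of this formula, so this is illustrative only):

```python
import math

MU = 255.0  # the value used in North American / Japanese telephony

def mu_law_compress(x):
    # x in [-1, 1]; output also in [-1, 1], with fine spacing near zero.
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    # Exact inverse of the compression curve.
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Small signals get far more of the code space than under linear quantization:
print(mu_law_compress(0.01))   # roughly 0.23 -- 1% of full scale occupies
                               # almost a quarter of the coded range
print(mu_law_expand(mu_law_compress(0.5)))  # round trip recovers 0.5
```

Quantizing the compressed value with 8 bits then gives roughly 13-14 bits of effective low-level resolution, which is exactly the trade this subthread is arguing about.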

-adour

Steven Abrams

unread,
Mar 20, 1995, 2:41:05 PM3/20/95
to
In article <3kijkk$7...@eyrie.graphics.cornell.edu>

jdeg...@ix.netcom.com (John DeGroof) writes:
> In <3kb5oc$p...@netnews.upenn.edu> abr...@cs.columbia.edu (Steven Abrams)
> writes:
> >All compression routines lose data to some degree? Gee. I hope my
> >software will work after I PKZIP, compress, gzip, zoo, or otherwise
> >compress it and then expand it. If even one bit is off, my binaries
> >won't run right!
>
> It will work the same in that respect, but the file is NOT the same.
> IF you doubt this, look at the file with a text editor before you
> compress it, then compress and uncompress using the various
> compression methods listed above and you will find they are different.
> Some files zipped and unzipped are larger than the original!

You could not be wronger if you tried.

Bitwise comparisons of zipped, gzipped, compress(1)'ed, zoo'ed,
arc'ed, and any other losslessly compressed files will reveal no
differences, unless bugs are introduced into the code. The algorithms used
for compression are, essentially, transformations which are (this is
becoming a common point, apparently) one-to-one. That is, an input
string of data maps to one and only one compressed string of data, and
every compressed string of data maps back to one and only one input
string of data.

The only possible way that I can think of that a file can be a
different size when compressed and then uncompressed is if it was
compressed on system A, which reports file sizes in terms of disk
blocks used, and uncompressed on system B, which reports file sizes in
actual bytes occupied by data. But this is not relevant to anything.
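One footnote on file sizes: the *zipped* file can indeed come out larger than the original (a simple counting argument shows no lossless scheme can shrink every possible input), but unzipping still restores the original exactly. A sketch:

```python
import zlib

# Compressed data is high-entropy, so compressing it again typically
# adds a few bytes of overhead rather than shrinking it further...
payload = zlib.compress(bytes(range(256)) * 16)
rezipped = zlib.compress(payload)
print(len(payload), len(rezipped))

# ...but the round trip is still exact, all the way down:
assert zlib.decompress(rezipped) == payload
assert zlib.decompress(payload) == bytes(range(256)) * 16
```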

> >I don't have the Targa spec off-hand, but I do know that there are
> >image formats that have purely lossless compression, i.e. GIFs. That
> >is, the data after expansion is identical to the data before
> >expansion.

> There may be, but I haven't seen one yet, and I've looked.

GIF files are losslessly compressed. Unfortunately, they are an 8-bit
format. ZIP a file in your favorite uncompressed format; you now have
a losslessly compressed image file format.

> Take any high resolution graphics file, use something like JPEG, then
> view the file again only this time zoom in (320x200) and look at the
> pixels. You will see the difference in the before vs the after.

JPEG is not, in general, lossless. JPEG encoders normally have a
user-controlled parameter for the tradeoff between size and quality.
Near-lossless results are achieved by turning the quality all the way
up, but I don't know if the JPEG standard requires that a truly
lossless mode be available.

A little knowledge, apparently, is a dangerous thing. Do you know
anything about how data are compressed? Do you know that data are
losslessly compressed whenever they are transferred over a modern modem,
using techniques similar to those used by PKZIP? Do you claim to be
able to tell the difference between an image which was transferred via
modem and one which was not? What about an image which was stored on
a disk using DoubleSpace or Stacker, two other widespread lossless
compression schemes? Do you have any idea how badly a computer
system would fail if these schemes (and the Macintosh RAM Doubler,
too) did not produce output which was bitwise identical to the
original input?

Do you know what LOSSLESS means?

Robert F. Antoniewicz

unread,
Mar 20, 1995, 6:29:16 PM3/20/95
to
In article <3kip1i$7...@eyrie.graphics.cornell.edu>, you write:
|> Let's see: 64 bits, 96 kHz sounds nice. Unfortunately it is not practical
|> or necessary.
|> More bits are not necessarily what is needed here, although 24 would
|> improve things at the bottom of the loudness scale IF we stay with linear
|> quantization. But we need to change from linear quantization to
|> logarithmic.
|> For example, the only time that all 16 bits are working in
|> today's system, is when the sound level is at maximum permissible
|> recording level. If you drop the recording level by 3dB, you now are
|> utilizing only 8 bits which can quantify only 256 different amplitude
|> levels.

Excuse me, but 2**16 gives 65536 different voltage levels (if you include
zero, then the max is 65535 times some tiny delta of voltage). Now, when
you drop the level by 6 dB, you have half the voltage magnitude which,
given the same tiny delta (because our system is linear), is approximately
the 32767 voltage level (65535/2 = 32767). 2**15 is 32768 (again, if you
allow for a zero voltage level, then the max level for 15 bits is 32767).
Hence, for a 6 dB drop you lose only one bit, not eight.

In addition, the change of the least significant bit (LSB) corresponds to
20*log10{1/(2**16)} = -96.3 dB. This means the DC shift of the LSB is a
-96 dB change with respect to the max voltage. At 20 bits, this number
becomes -120 dB. This is pretty good resolution.

Nonlinear voltage conversions would tighten up one area of the range, and
loosen another. Depending on what type of music you listen to, and the
application of the nonlinearity, this can be bad or good. But all around, I
think linear is hard to beat.

Now, 96kHz is another story.... This is a good idea, but, perhaps a little
higher would be better. Heaven knows the technology is advancing quickly
enough. The current standard is pretty old.

Bob A.

PS - When I did the math, I was really surprised to see the resolution
value! No wonder you can clearly hear recorded hiss and noise!
--

Of course you must realize, that I do not
speak for the organization I work for.

Richard D Pierce

unread,
Mar 21, 1995, 8:44:37 AM3/21/95
to
In article <3kkfgj$o...@tolstoy.lerc.nasa.gov>,

John DeGroof <jdeg...@ix.netcom.com> wrote:
>In <3kdbed$n...@tolstoy.lerc.nasa.gov> how...@kaiwan.com (Howard
>Christeller) writes:
>
>>If the analog signal is band-limited to 20 KHz, 44.1 KHz sampling is
>>enough. Without proof that the transducers on each end of the audio
>>chain, much less our ears, can do more than 20 KHz, then there is NO
>>need for higher sampling rates.
>
>Oh come on, do you listen to sine waves or music? Think about it!
>We're not trying to achieve greater than 20KHz signals, just more
>samples to get the harmonics. A side effect would be that we can
>sample frequencies higher than 20KHz, but this is ONLY a side effect!

John, the two are EXACTLY equivalent! Please go check with Mr. Fourier
before you make such a claim.

What are the harmonics? Well, in both a theoretical AND a very real,
verifiable, physical sense, they ARE sine waves.
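The point is easy to check numerically. A sketch: a "waveform with harmonics" built from sines below 20 kHz is captured by 44.1 kHz samples with every harmonic amplitude intact (frequencies here are chosen to land exactly on DFT bins so the bookkeeping stays clean):

```python
import math

FS = 44100   # CD sample rate
F0 = 5000    # fundamental of a square-ish test wave
N = 4410     # 0.1 s of samples

# A waveform's harmonics ARE sine waves: build one from the first two
# odd harmonics of a square wave, both below 20 kHz.
def x(t):
    return math.sin(2*math.pi*F0*t) + (1/3)*math.sin(2*math.pi*3*F0*t)

samples = [x(n / FS) for n in range(N)]

# Recover each harmonic's amplitude from the samples by correlation:
def amplitude(f):
    c = sum(s * math.sin(2*math.pi*f*(n/FS)) for n, s in enumerate(samples))
    return 2 * c / N

print(round(amplitude(F0), 3), round(amplitude(3*F0), 3))  # -> 1.0 0.333
```

Both harmonic amplitudes come back exactly, from nothing but the 44.1 kHz samples.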

>>I doubt that many people can achieve 96 dB S/N ratio acoustically in
>>a home environment. 16 bits is pretty good for a consumer format.
>>A few more bits might help, and are surely useful in the recording
>>studio, but 24 bits gives you 144 dB. 64 bits is absurd - 384 dB!
>
>I greatly disagree. If you are the type of person that listens to
>music in the car, ok, but if you are anywhere near a serious
>audiophile, you would agree with me.
>
>Again, you're looking at another side effect. We're not trying to get
>a higher SPL, just more resolution. Imagine your volume control
>notched with 4 notches. You may not like any of these settings, as one
>is surely too loud, and another is too quiet. If you had more notches,
>you could find the perfect listening level. This is what we are
>trying to achieve with more bits. The ultimate would be what you have
>now, an analog control, with an infinite number of notches.

John, you simply do not seem to understand the very nature of encoding
with binary bits. Your volume control analogy is flawed for a variety of
reasons: first, you do not state what the weighting of each "step" is.
Second, your 4-step volume control can be represented by a 2-bit
system. Third, the resolution of any system is DEFINED by the ratio of the
noise in the system to the maximum output of the system. (Gee, that
happens to be the RECIPROCAL of dynamic range.)

The "system" we are talking about here is human hearing. There is a lower
level below which changes are indistinguishable. That lower limit is set by
a combination of phenomena that conspire together to limit the resolution
of the ear, phenomena such as sound-induced hearing loss, the sound of
blood moving through the capillaries surrounding the ear, and, ultimately,
the noise floor created by the random collisions of air molecules under
the influence of random thermal motion. THAT'S THE ULTIMATE LOWER LIMIT OF
RESOLUTION.

Fine, let's build an encoding system that has two more bits than needed
for the floor. That gives us 12 dB of range BELOW the noise floor.

Now, let's put the upper limit at the point at which short-term exposure
can cause hearing damage, which is about 120 dB SPL, and add 12 dB to
that, just for the hell of it.

Now, the result of that is a dynamic range running from 12 dB BELOW the
smallest unambiguous change in air pressure that is detectable by the
BEST ear to 12 dB above the level where that same ear would start to
suffer irreversible damage. Add those 24 dB to the difference between
the two levels, 120 dB, and, guess what, we get a MAXIMUM dynamic range
of 144 dB or so.

And that dynamic range is completely encodable by 24 bits.

Don't like dynamic range? Fine, let's use resolution. Now, instead of a
dynamic range of 144 dB above the zero level, we'll have a resolution of
144 dB BELOW the loudest encodable level.

And that's a resolution of 24 bits.
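The arithmetic here is easy to check. A minimal Python sketch (an editorial illustration, not code from the thread) of the bits-to-dynamic-range rule:

```python
# Dynamic range implied by an N-bit linear PCM word, using the
# rule of thumb 20*log10(2**N), i.e. about 6.02 dB per bit.
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of N-bit linear PCM, in dB."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 64):
    print(f"{bits:2d} bits -> {dynamic_range_db(bits):6.1f} dB")
# 16 bits land near 96 dB, 24 bits near 144 dB, 64 bits near 385 dB
```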

>S/N ratio is something entirely different. Our recording consoles S/N
>ratio is around 127dB. I would do more research on what S/N and
>noise floors really are, since you are showing your lack of knowledge.

John, but your petticoat is showing too. That noise floor in your ANALOG
console IS the limit of resolution in your system, like it or not. Put in
a purely analog signal below that noise: now, how can you tell at any
instant what part of the voltage coming out is UNAMBIGUOUSLY signal and
which part is UNAMBIGUOUSLY noise? You can't: the noise has provided an
absolute lower level of knowability, of unambiguity, of, like it or not,
RESOLUTION.

Resolution, noise, maximum output, dynamic range, and word length are all
intimately and inextricably related.

>>I think that you are blaming the standards instead of the poor
>>implementations that we have all heard. Only recently have CD
>>players started to approach the limits imposed by the standards.
>

>BTW, do explain what you mean by that last sentence about CD players
>just now reaching the limits. Nothing has changed.

NOTHING HAS CHANGED? Sorry, John, this is up there with your proclamation
that all compression schemes change data.

Much has changed: the introduction of oversampling techniques and noise
shaping, the elimination of high-order analog anti-imaging filters, and
so on and so on have changed rather dramatically.

>Not at all. I've never heard anything better than full analog.

That's your opinion, to which you are fully entitled.

>To
>be specific, I used 24 track analog tape with Dolby SR. If you could
>hear the best of analog and the best of digital with the pro equipment
>we have in the studio, you would have no question in your mind either.

And there are many that would disagree.

And there are perfectly objective listening experiments that can
demonstrate that such an analog chain IS perturbing the signal more than
it should, and more than a well-done digital chain. You may not LIKE the
less perturbed signal, but that does not remove the fact that it IS less
perturbed.

>Again, "moving more bits" has nothing to do with the analog noise, and
>if you read my response above, it does improve the sound.

A bitwise representation of a signal ambiguates the signal at the level
of one least significant bit. A continuous representation with noise of a
signal ambiguates the signal at the level of the noise. Simple as that.
The bits provide the ultimate floor in resolution on a digital system,
the noise provides the ultimate floor in resolution in an analog system.

>The part
>that is broken is the digital part. You have almost nailed it
>literally, since the digital waveform is a "broken" waveform. That's
>the problem that could be fixed with a higher sampling rate.

A band-limited system OF ANY KIND limits the total number of changes that
can occur in time. Period. It makes no difference if it's a discrete
time-sampled system or a band-limited analog system: it's the band-limiting
that's the problem, John, NOT the sampling per se.

I'd agree with your points IF you could demonstrate the necessity for
bandwidths greater than 20 kHz AND the ability of current analog storage
and playback technology to exceed that same limit.

Rachel McKay

Mar 21, 1995, 9:01:58 AM
In article <3k1qf3$2...@geraldo.cc.utexas.edu>, ly...@teleport.com says...
>
>In article <3jm04q$3...@netnews.upenn.edu>, "Kevin Sutton"
><Sut...@crop.cri.nz> wrote:
>
>> A friend asked me the other night if anyone ever thought of
>> bringing out an analog compact disc, prior to the invention of
>> the digital version. That is, a CD using LaserDisc picture
>> type technology (FM modulation?) to store audio on a disc.
>>
>> Does anyone know if this was ever attempted and, if so,
>> whether the system ever saw the light of day?

Funny you should mention it but . . . this was something I thought of
years ago. Eventually I gave up on the idea but basically it was:

Write an analogue stereo waveform onto a compact disc and read using a
3-D holographic laser thingy. I didn't get as far as calculations because
I couldn't find an envelope.

My second thought was to use the 3-D laser technology being talked about
at the time (late 1980s) to write and read a 3-D holographic digital
signal onto CDs. This way you could have, say, 8, 16 or whatever bit
words written vertically with the same horizontal pitch as standard CDs.
I did hear of a professor from Portsmouth Polytechnic joining Sony in
Japan to "continue the research" into this very same idea. Never heard
anything of it since . . .

--
Rachel McKay <rm...@ntms.bt.co.uk>
Software Engineer, BT.

Monte P McGuire

Mar 21, 1995, 11:25:00 AM
In article <3kkfgj$o...@tolstoy.lerc.nasa.gov>,
John DeGroof <jdeg...@ix.netcom.com> wrote:
>
>>I doubt that many people can achieve 96 dB S/N ratio acoustically in
>>a home environment. 16 bits is pretty good for a consumer format.
>>A few more bits might help, and are surely useful in the recording
>>studio, but 24 bits gives you 144 dB. 64 bits is absurd - 384 dB!
>
>I greatly disagree. If you are the type of person that listens to
>music in the car, ok, but if you are anywhere near a serious
>audiophile, you would agree with me.
>
>Again, you're looking at another side effect. We're not trying to get
>a higher SPL, just more resolution. Imagine your volume control
>notched with 4 notches. You may not like any of these settings, as one
>is surely too loud, and another is too quiet. If you had more notches,
>you could find the perfect listening level. This is what we are
>trying to achieve with more bits. The ultimate would be what you have
>now, an analog control, with an infinite number of notches.

That's just not the way it works. As long as a digital signal has
been dithered properly upon conversion, those bottom bits are
perfectly random and you get a nice uncorrelated noise floor; no
'steps' are possible. The only time you'd want 'infinite notches' is
when you have no dither, and that's just silly anyway; you'd still
have asymptotically small distortion, so why not have _zero_ distortion,
use dither, and store only a finite number of bits?

You can make the noise floor as low as you like with more bits, but
please don't state that analog has an infinitely low noise floor;
that's what you imply with the 'infinite number of notches' argument.
The only time you'll get that is in a world at 0 degrees Kelvin where
you won't be around to enjoy it.
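Monte's claim that proper dither leaves no "steps" can be sketched numerically. The following Python experiment (an editorial illustration with arbitrarily chosen signal parameters, not code from the thread) quantizes a low-level 2-LSB sine with and without TPDF dither and measures the error at the third harmonic, where undithered quantization distortion concentrates:

```python
# Quantization error of a 2-LSB sine, with and without TPDF dither.
import math, random

random.seed(0)
FS = 48000                # sample rate; one second of signal
F0 = 440                  # fundamental (an integer number of cycles/record)
signal = [2.0 * math.sin(2 * math.pi * F0 * n / FS) for n in range(FS)]

def quantize(samples, dither):
    """Round to integer LSBs, optionally adding +/-1 LSB TPDF dither first."""
    out = []
    for x in samples:
        d = (random.random() - random.random()) if dither else 0.0
        out.append(round(x + d))
    return out

def harmonic(err, freq):
    """Amplitude of one DFT bin of the error signal, in LSBs."""
    re = sum(e * math.cos(2 * math.pi * freq * n / FS) for n, e in enumerate(err))
    im = sum(e * math.sin(2 * math.pi * freq * n / FS) for n, e in enumerate(err))
    return 2 * math.hypot(re, im) / len(err)

plain = [q - s for q, s in zip(quantize(signal, False), signal)]
dith  = [q - s for q, s in zip(quantize(signal, True), signal)]

# undithered error piles up at the 3rd harmonic; dither spreads it flat
print("3rd harmonic of error, no dither:", harmonic(plain, 3 * F0))
print("3rd harmonic of error, dithered: ", harmonic(dith, 3 * F0))
```

Without dither the error is distortion correlated with the signal; with dither it becomes a benign, signal-independent noise floor, which is exactly Monte's point.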

I'm all for high resolution and certainly for recording, it's nice to
have more than 16 bits, but I think 24 bit linear PCM should be enough
to be imperceptibly different from any higher resolutions.

>S/N ratio is something entirely different. Our recording consoles S/N
>ratio is around 127dB. I would do more research on what S/N and
>noise floors really are, since you are showing your lack of knowledge.

After the mix bus?? I think not! That's just the E.I.N. of the mike
amp at high gains and it really is just the noise of a 200 ohm
resistor at room temperature. Measure again at 20dB gain. What about
the two fader follower amps? What about the mix bus stage? Show me a
console that sums 24 inputs to stereo and has a noise floor less than
-100dBV and I'll buy you dinner.

The only way to quiet a console down is to use very low valued summing
resistors and faders (or reduce the temperature severely ;-) Liquid
nitrogen is not a reasonable solution since transistor gain falls off
down there and it's a little expensive. Dry ice and acetone is pretty
good because FETs pick up extra gain down there (a free 6dB gain at
~170 Kelvin), but bipolars aren't that happy and you only get sqrt(2)
noise improvement so what's the point.

Low valued resistors are possible, but this can cause distortion and
crosstalk problems due to the large drive currents and ground currents
that would be required. Designers have traded this potential
distortion off for a sensible noise floor of -90dBV or so because our
storage devices haven't traditionally been able to store anything near
-80dBV accurately until digital arrived. Of course, I've measured
some ($30K professional) 32 input consoles that have noise floors at
-70dBV, but they were crap anyhow.

Learn more about how noise in analog and digital systems works; it
really is important to know. Otherwise, you'll be unable to take
advantage of what modern devices can offer and you'll be stuck using a
limited range of equipment for the wrong reasons.

Regards,

Monte McGuire - N1TBL
mcg...@world.std.com

Steven Abrams

Mar 21, 1995, 1:27:23 PM
Dick Pierce wrote:
>ALL compression routines lose data? ALL of them, John? NO MATTER WHAT
>FORMAT? Care to back that up with hard evidence? For example, you are
>asserting that if I take an audio file (the actual sample length and
>sample rate is completely up to you) and compress it with PKZIP,

In article <3kiou7$7...@eyrie.graphics.cornell.edu>
jdeg...@ix.netcom.com (John DeGroof) responded:

> I originally said I don't have much experience with audio data
> compression methods and don't trust them because of my experience with
> video compression, therefore I wouldn't have any "hard evidence" to
> show you. It's a matter of faith, or lack thereof in my case.

Based on your experience with video, would you think that if you took
an uncompressed image file, PKZIPped it, and then PKUNZIPped it, the
resulting image file would be indistinguishable from the original in:

a) bit-by-bit comparison
b) viewing on the screen

Just want to know for sure where your "faith" is coming from.

SysOp

Mar 21, 1995, 1:30:42 PM
jdeg...@ix.netcom.com (John DeGroof) writes:

Just as a disclaimer, I'm not an audio expert, but I feel I still
have some knowledge in the digital area....

[...]


> Also, 64 bits of resolution does give a rather absurdly large amount of
> dynamic range. If the statement in the first paragraph is true, how
> much dynamic range do you feel we need, and how many bits would it take
> to achieve this? We're testing a new digital multitrack recorder (that

[...]

Someone else posted the formula for dB as "20*log(2 ^ Bits)". The
thing to do is decide what your dynamic range should be, and that
automatically determines how many bits you want.
Personally, I think the 96 dB given by 16 bits is "good enough", but
I'm open to the idea of 120 dB, which is achievable with 20 bits. Any
more than this, and I think it would have to be explained how
physical limitations will impact the sound. (I believe others
have mentioned the limits of mics, and I wonder about the limit of my
hearing.) Not that I feel qualified to pick the allowed dynamic
range, but you did ask. ;-)
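The recipe above ("decide the dynamic range, then derive the bits") inverts to a one-liner. A hypothetical helper (editorial sketch, assuming the 20*log(2^Bits) formula quoted above):

```python
# Smallest linear-PCM word length whose dynamic range meets a target.
import math

def bits_for_range(target_db):
    """Solve 20*log10(2**bits) >= target_db for the word length."""
    return math.ceil(target_db / (20 * math.log10(2)))

print(bits_for_range(96))   # -> 16, the "good enough" CD figure
print(bits_for_range(120))  # -> 20, the 20-bit figure mentioned above
```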

[...]


> There have been two responses on the need (or lack of) for more bits in
> digital audio, but no comments on the need for a higher sampling rate,
> except by those that have never seen an actual digital waveform to know
> that they don't look like pure sine waves. Does this mean everyone
> agrees we need a higher sampling rate? I sure hope so.

This I don't understand. Any sine wave below 22kHz should be able to
be perfectly played back as a sine wave. Now, if you mean you record
a frequency above 22kHz, then, of course you're not going to get it
back.

Do you believe you can hear above 22kHz? (I'm not flaming, I am
genuinely interested, as most claims are <20kHz.) Whatever you want
your highest reproducable frequency to be, multiply that by 2, and
that is your sampling rate (via the Nyquist theorem).
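The Nyquist claim here is checkable: samples taken above twice the signal frequency determine the band-limited signal even between the sample instants. A small Python sketch (editorial illustration using truncated Whittaker-Shannon interpolation, so the agreement is only near-exact):

```python
# Reconstruct a sampled sine between its sample instants via
# (truncated) Whittaker-Shannon sinc interpolation.
import math

FS = 44100.0          # sample rate
F = 10000.0           # test tone, well below FS/2
N = 2000              # samples; evaluate near the middle to limit edge error

samples = [math.sin(2 * math.pi * F * n / FS) for n in range(N)]

def reconstruct(t):
    """Interpolated value at time t (seconds), truncated sinc sum."""
    total = 0.0
    for n, s in enumerate(samples):
        x = t * FS - n
        total += s * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

# a point halfway between two sample instants, near the record's middle
t = 1000.5 / FS
print(reconstruct(t), math.sin(2 * math.pi * F * t))
```

The two printed values agree to a few parts in ten thousand; with an infinite (untruncated) sum the reconstruction would be exact, which is the theorem's content.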

[...]


> I originally said I don't have much experience with audio data
> compression methods and don't trust them because of my experience with
> video compression, therefore I wouldn't have any "hard evidence" to
> show you. It's a matter of faith, or lack thereof in my case.

I think you chose poor wording. :-) While most picture storage
formats (GIF, IFF, etc.) are lossless, as are almost all of the archiving
programs (ZIP, LHa, etc.), video and audio compression are more often
(dare I say it!) lossy, due to the high bandwidth. But it's something
that can be clearly defined, and is chosen by the designers of a
given system. (JPEG is lossy; you can store pictures and audio
either way, with or without loss in the compression.) At any rate,
you don't need "faith"; this stuff can be quantified.

Just as an observation, back to the topic of number of bits; it's a
bit misleading thinking of "steps", as that is merely the digital
storage format. The filter will roll off high frequencies, and that
will "smooth" out the steps. (I'm not sure if you or anyone else was
suggesting that the steps were "audible" as artifacts, but I thought
it was worth pointing out.... It's something I have thought about,
and I've had to convince myself that everything was OK. :-) )

I don't think I've gone out on a limb on any of the above, but I'm
sure others will point out any problems. :-)

---
Gary Wolfe "I'm not sleeping around..." -- Ned's Atomic Dustbin
tlvx!sy...@sinkhole.unf.edu or tlvx!sy...@interphase.com

Charles King

Mar 21, 1995, 1:34:17 PM
64 bits is obviously too many, but what about 18 or 20?
With the introduction of DVD, a 20 bit consumer format is
viable - anyone have any ideas on how likely it is?

---
Charles King
cha...@anat.ucl.ac.uk

Terry Rosen

Mar 21, 1995, 1:42:53 PM
Yes, I worked for a record company in R & D and we made microdisks
as small as 1.5" in diameter. They were cut at 90 and 133 rpm (there were
actually four sizes/speeds). They were all analog and were produced and sold
mostly overseas--especially in Japan. I cut thousands of masters, including
Elvis, Simon /Gar., Dire Straits, etc., plus educational stuff. I still have
a few. The disks were played with a special player embodying a rotor-based
pickup--that is, the record stayed stationary while it played. It was all
great fun.

Terry

Richard D Pierce

Mar 21, 1995, 4:53:02 PM
In article <3kn5qb$o...@tolstoy.lerc.nasa.gov>,
Steven Abrams <abr...@cs.columbia.edu> wrote:
>Dick Pierce wrote:
> >ALL compression routines lose data? ALL of them, John? NO MATTER WHAT
> >FORMAT? Care to back that up with hard evidence? For example, you are
> >asserting that if I take an audio file (the actual sample length and
> >sample rate is completely up to you) and compress it with PKZIP,
>
>In article <3kiou7$7...@eyrie.graphics.cornell.edu>
>jdeg...@ix.netcom.com (John DeGroof) responded:
>> I originally said I don't have much experience with audio data
>> compression methods and don't trust them because of my experience with
>> video compression, therefore I wouldn't have any "hard evidence" to
>> show you. It's a matter of faith, or lack thereof in my case.
>
>Based on your experience with video, would you think that if you took
>an uncompressed image file, PKZIPped it, and then PKUNZIPped it, the
>resulting image file would be indistinguishable from the original in:
>
> a) bit-by-bit comparison
> b) viewing on the screen
>
>Just want to know for sure where your "faith" is coming from.

I don't know who you are addressing here, since your attribution method
is a little bass-ackwards.

However, I have NO faith that it will or will not work. I don't rely on
faith; rather, I rely on the facts. If there exists a file containing a
graphics image, of ANY kind, that I can view on the screen, and I
compress that file with a lossless compressor (such as PKZIP) and
uncompress it (with PKUNZIP, for example), it WILL pass both the tests
you suggest:

It will be bit-for-bit identical

It will appear identical on the screen (assuming someone hasn't
diddled with the monitor controls).

No faith involved. It will behave the way it does, regardless of what I
BELIEVE it must do.
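The round trip described above is easy to reproduce today with Python's standard-library zlib, whose DEFLATE algorithm is in the same lossless family PKZIP made famous (a sketch standing in for the PKZIP/PKUNZIP pair, not the actual 1995 tools):

```python
# Lossless compression round trip, verified bit for bit.
import os, zlib

payload = os.urandom(65536)   # stand-in for an audio or image file
restored = zlib.decompress(zlib.compress(payload, level=9))

print(restored == payload)    # True: bit-for-bit identical, every time
```

No faith involved here either: DEFLATE is defined to be invertible for any input, so the equality holds regardless of the payload's content.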

Mark Brindle

Mar 21, 1995, 7:58:14 PM
John DeGroof (jdeg...@ix.netcom.com) wrote:

<paraphrase: digital audio standard should be 64 bits at 96 kHz>

OK, let's try this out and see where it leads. For the absolute
ultimate in resolution, we should set the A/D's "gain" such that
the least significant bit of the 64-bit data word represents the
arrival (or departure) of a single electron at the analog input.
Since we're counting individual electrons, the digital data has
exactly the same "granularity" as the "continuous" analog input.

With this encoding scheme, 0 indicates that *NO* electrons have
arrived or departed during the 1/96000 second sampling period.
(More precisely, it indicates that there were equal numbers of
"incoming" and "outgoing" electrons during the sampling period.)

Since we're dealing with an AC signal, the full-scale positive
and negative values are +(2^63) and -(2^63) -- corresponding
to net arrival/departure rates of roughly 10^19 electrons per
sampling period -- or about 10^24 electrons per second.

That's pretty close to 1 mole of electrons -- which, according
to Mr. Faraday, is about 10^5 coulombs. Thus, a 96kHz, 64-bit
A/D converter should have a *minimum* full-scale input current
of around 100,000 amps. (Actually, more like 142,000 amps.)

With a standard 47k ohm input impedance, the full-scale input
voltage will be about 6.67 gigavolts -- and each input will
dissipate about 667 trillion watts.

To keep the analog input noise below the 0.72 nV quantization
level and help burn off some terawatts, it might be a good
idea to immerse the A/D (and the orchestra) in liquid helium.

...don't forget the Monster Cables,

Mark

Mark Brindle

Mar 21, 1995, 8:44:19 PM
Richard D Pierce (DPi...@world.std.com) wrote:

: The bits provide the ultimate floor in resolution on a digital system,
: the noise provides the ultimate floor in resolution in an analog system.

Just a minor nit-pick, Richard -- in a really good A/D, resolution
is limited by noise in the *analog* front-end; the last few bits
of "digital resolution" simply dance around in the analog noise.

For example, we make a (low speed, 1.0 V full-scale) A/D with an
LSB-weight of 10 nV and an analog "noise-floor" of about 100 nV
(rms). Although the "digital resolution" is almost 27 bits, the
analog dynamic range is only about 24 bits -- so, we spec it as
a "24 bit" A/D.

At least at low sampling rates, getting more "digital bits" isn't
all that difficult or expensive -- the *real* trick is designing
a low-noise analog front-end with very high common-mode rejection.
For reasonable ease of use, a "24 bit" A/D should have a CMRR of
at least 140 dB. (A 64-bit A/D needs a CMRR of about 380 dB!)

...even bits are analog,

Mark

Tom Kong

Mar 21, 1995, 11:42:04 PM
In article <3kmqi1$5...@netnews.upenn.edu>,
Richard D Pierce <DPi...@world.std.com> wrote:
>
>John, but your petticoat is showing too. That noise floor in your ANALOG
>console IS the limit of resolution in your system, like it or not. Put in
>a purely analog signal below that noise: now, how can you tell at any
>instant what part of the voltage coming out is UNAMBIGUOUSLY signal and
>which part is UNAMBIGUOUSLY noise? You can't: the noise has provided an
>absolute lower level of knowability, of unambiguity, of, like it or not,
>RESOLUTION.
>

There seems to be a common misconception that noise level has to be
below signal level for the signal to be detected. That is, that the
S/N ratio has to be greater than 0 dB for the signal to be detected.

If this were true, communication in deep space would be impossible,
nor would conversation in a noisy bar be possible. Shannon's theorem
gives us the limit to our information rate given any S/N ratio.
So with an S/N ratio below zero dB (i.e., noise louder than signal),
our information rate is low, but non-zero. This of course assumes
that the noise is uncorrelated with the signal.
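Shannon's formula makes the point concrete. A sketch evaluating C = B*log2(1 + S/N) over a nominal 20 kHz audio bandwidth (an editorial illustration, not from the thread):

```python
# Channel capacity vs. S/N ratio: still positive below 0 dB.
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + S/N), with S/N given in dB."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

for snr_db in (10, 0, -10, -20):
    print(f"{snr_db:+4d} dB SNR -> {capacity_bps(20000, snr_db):10.1f} bit/s")
```

Even at -20 dB (noise 100 times the signal power) the capacity is a few hundred bits per second, not zero, which is why deep-space links and noisy bars both still work.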

/tom

Bob Myers

Mar 22, 1995, 12:10:18 AM
John DeGroof (jdeg...@ix.netcom.com) wrote:
> So what you're saying is that the space between the steps of resolution
> with 16 bits is the same as the space between the steps of resolution
> with 64 bits, only because there are more bits, there are more steps?
> (which also gives us the huge dynamic range). I'm using the steps
> analogy because it's the clearest one I could come up with for now.

The "space between the steps" only has meaning when we've defined
the signal we're trying to digitize, in terms of the maximum permissible
value. For years now, the standard line-level audio signal peaks
out at a volt or so; let's be generous and say that we need our
supposed 64-bit system to handle a signal swing that covered 10 volts.
With 2^64 possible values, the "size" of the LSB would be roughly:

10V/2^64 = (approx.) 5.4 x 10^-19 V.

Just for grins, now, try to figure out the thermal noise voltage
present across, say, a 75 ohm resistor at 25 deg. C over a reasonable
audio bandwidth of maybe 20 kHz. Get the point?
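For those who don't want to do the homework, here is the figuring (an editorial sketch using the standard Johnson noise formula sqrt(4kTRB); the 10 V full-scale and 64-bit figures come from the paragraph above):

```python
# Thermal (Johnson) noise of a 75 ohm resistor over 20 kHz at 25 C,
# compared with the LSB of a hypothetical 10 V full-scale 64-bit system.
import math

K_BOLTZMANN = 1.380649e-23   # J/K
T = 298.15                   # 25 deg C, in kelvin
R = 75.0                     # ohms
B = 20000.0                  # Hz of audio bandwidth

v_noise = math.sqrt(4 * K_BOLTZMANN * T * R * B)   # about 160 nV
lsb_64 = 10.0 / 2**64                              # about 5.4e-19 V

print(f"thermal noise:  {v_noise:.3g} V")
print(f"64-bit LSB:     {lsb_64:.3g} V")
print(f"the noise spans {v_noise / lsb_64:.3g} LSBs")
```

The resistor's own thermal noise is some eleven orders of magnitude larger than the 64-bit step size, which is Bob's point: the bottom bits would encode nothing but physics.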

> Lets say there was a machine capable of 64 bit samples. If I'm
> recording close to 0 level (highest possible in digital), I'm using
> almost all of the 64 bits, and therefore have more resolution, and a
> greater number of volume steps within a specified dynamic range, or do
> you disagree?

But you have not specified a fixed "dynamic range"; that's the whole
point. The number of bits DETERMINES the dynamic range, and all that's
left in order to specify the "step size" is to fix the upper limit, as
was done above. Side question: how much dynamic range do you think is
delivered by the best current analog recording medium? Kindly avoid
the use of nonsense phrases like "analog has infinite resolution."

> to achieve this? We're testing a new digital multitrack recorder (that
> hasn't been released yet), and the trend seems to be towards more bits.
> The machine I'm referring to is a Studer 48 track digital, which can
> use 24 bit samples. If you're saying 17 bits is enough, why are the
> new machines 24 bits? How much dynamic range does this equate to?

Figure it out; if the dynamic range of a 16 bit system is about 96 dB,
what would you expect for 24?

But the point of 24-bit recording isn't exactly dynamic range; it's
to provide extra bits at the bottom end so that digital processing can
be performed (digital processing being essentially mathematical
manipulation of the data) without rounding and other errors affecting
the 16 bits that are to be delivered to the consumer.

> There have been two responses on the need (or lack of) for more bits in
> digital audio, but no comments on the need for a higher sampling rate,
> except by those that have never seen an actual digital waveform to know
> that they don't look like pure sine waves. Does this mean everyone
> agrees we need a higher sampling rate? I sure hope so.

What in the world is an "actual digital waveform"? Do you disagree
with the proposition that a sampling rate of N Hz permits all signal
components below N/2 Hz to be reproduced EXACTLY, limited only by the
quantization error? What is the purpose of a "reconstruction filter"?

> I originally said I don't have much experience with audio data
> compression methods and don't trust them because of my experience with
> video compression, therefore I wouldn't have any "hard evidence" to
> show you. It's a matter of faith, or lack thereof in my case.

Again, given your claimed experience in video compression, are you
unfamiliar with such techniques as run-length encoding? Wouldn't
you say that that's a lossless compression technique?

Bob Myers | my...@fc.hp.com
Senior Engineer, Displays & Human Interface | Note: The opinions presented
Workstation Systems Division | here are not those of my employer
Hewlett-Packard Co., Ft. Collins, CO | or of any rational person.

John Johnston

Mar 22, 1995, 10:25:30 AM
On a related topic, there was also a mid-80's attempt to commercialize a
laser-read analog LP technology. The company was called Finial Technology,
and the technology was basically killed off in the CD onslaught.

John DeGroof

Mar 22, 1995, 10:30:39 AM
In <3kkf92$o...@tolstoy.lerc.nasa.gov> rose...@varney.idbsu.edu
(Terry Rosen) writes:

>All of this is very interesting, but aren't you forgetting something? It's
>still processing of some sort that's getting between the music and the
>listener. So what if it's a billion bits and a sampling rate in the middle
>of channel 5--can you do it all with a handful of transistors? That's the
>digital dilemma.

I totally agree with you! I would like to see an analog format
similar to laserdiscs using lasers to avoid wear. The only problem
is that mastering is almost always done on a computer, in the
digital domain, and therefore an analog format wouldn't benefit
us at all. With digital, we can get damn close to the real thing
in theory, but I have a motto that applies to more than just
music (like pictures): Nothing beats the real thing!

SDuraybito

Mar 23, 1995, 2:55:27 PM
my...@hpfcla.fc.hp.com (Bob Myers) writes:

>But you have not specified a fixed "dynamic range"; that's the whole
>point. The number of bits DETERMINES the dynamic range, and all that's
>left in order to specify the "step size" is to fix the upper limit, as
>was done above. Side question: how much dynamic range do you think is
>delivered by the best current analog recording medium? Kindly avoid
>the use of nonsense phrases like "analog has infinite resolution."

I think another way to look at this is to specify different bit rates for
a given dynamic range. You can use 8, 16, 20, 24 or more bits to define,
say, 96 dB of dynamic range. We seem comfortable that 16 bits does a
credible job. I think DeGroof is implying that 20 bits or higher would
allow for finer differences in SPL levels between steps. This can be
thought of as "high resolution."

I would also say that until we have evidence that an unzipped ZIP file has
errors, the system works. Then again, can we safely say that until we see
aliens they don't exist?

JJMcF

Mar 24, 1995, 11:45:06 AM
The basic questions here seem to be (1) Do you believe that the current
digital recording systems have audible artifacts and (2) can these be
minimized or eliminated by "upgrading" the digital system by adding bits
or increasing sampling rate. The latter hope may be based on analog
analogies--for example, tape recording has definite "tape" artifacts at 1
7/8 ips, but 15 ips tape has hardly any of these. Same for disc recording
and so on. What we really need is listening comparisons with upgraded
digital systems. Unfortunately, unlike tape or disc systems, these
upgraded systems are largely in the hands of technicians and engineers who
tend not to believe that the current systems in fact do have audible
artifacts, and are unsympathetic to upgrading projects.

Tim Takahashi

Mar 25, 1995, 12:48:16 AM
>my...@hpfcla.fc.hp.com (Bob Myers) writes:
>
>>But you have not specified a fixed "dynamic range"; that's the whole
>>point. The number of bits DETERMINES the dynamic range, and all that's
>>left in order to specify the "step size" is to fix the upper limit, as
>>was done above. Side question: how much dynamic range do you think is
>>delivered by the best current analog recording medium? Kindly avoid
>>the use of nonsense phrases like "analog has infinite resolution."

This argument is weak in that it presumes that parameters such
as "dynamic range" and "s/n ratio" are all that are needed to
describe the musical performance of equipment. Nothing could be
further from the truth.

I have experienced extraordinary realism from 78 rpm disks, which
have a nominal 40 dB s/n ratio and 50-9000 Hz frequency response.

I have tried to capture that on 65 dB s/n ratio 7-1/2 ips open reel,
or on 80 dB s/n ratio Dolby C cassettes, and you experience the
meaninglessness of "s/n" ratio. I haven't tried this with a digital
recorder, but I envision the transfer will still be imperfect.

Regarding sampling rates, etc.: Nyquist applies to steady-state
signals with no phase component. I would anticipate that a
44.1 kHz, 16-bit sampling system will have measurable error on
wide-band transient signals...

Certainly when I perform data acquisition for industrial projects,
10-12 samples per wavelength are desirable, not 2 (and that
counts the anti-aliasing filtration, which can then be less steep).

While PKZIP/PKUNZIP is lossless digital compression, many schemes
are not lossless (MD and DCC used "lossy" compression schemes),
and the algorithm Ma Bell uses certainly isn't lossless either.

tim

Russell DeAnna

Mar 27, 1995, 10:32:04 AM
Tom Kong <t...@tyres.asd.sgi.com> wrote:
>
>There seems to be a common misconception that noise level has to be
>below signal level for the signal to be detected.

If I take the output of a capacitive pressure
sensor which is being driven by a low signal and display it on
an HP oscilloscope, it looks like noise. But the HP scope has an
averaging option. If I average over 256 samples, a small signal
emerges out of the morass. Would this be considered a situation
where the signal is below the noise? The signal in this case
is a periodic sine wave of infinite duration. I suppose, if
you averaged long enough, any periodic signal could be seen.
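The averaging trick works because coherent averaging shrinks uncorrelated noise by the square root of the number of averages while leaving the periodic signal untouched. A Python simulation (illustrative parameters chosen by the editor, not the actual measurement described above):

```python
# Pull a 0.1-amplitude sine out of unit-amplitude noise by averaging
# 256 periods, scope-style, then read off the recovered amplitude.
import math, random

random.seed(1)
PERIOD = 100        # samples per cycle
AVERAGES = 256      # like the HP scope's averaging option

def noisy_cycle():
    """One period of a buried sine: S/N well below 0 dB."""
    return [0.1 * math.sin(2 * math.pi * n / PERIOD) + random.gauss(0, 1.0)
            for n in range(PERIOD)]

avg = [0.0] * PERIOD
for _ in range(AVERAGES):
    for n, v in enumerate(noisy_cycle()):
        avg[n] += v / AVERAGES

# amplitude of the recovered fundamental, via one DFT bin
re = sum(v * math.cos(2 * math.pi * n / PERIOD) for n, v in enumerate(avg))
im = sum(v * math.sin(2 * math.pi * n / PERIOD) for n, v in enumerate(avg))
amp = 2 * math.hypot(re, im) / PERIOD
print("recovered amplitude:", amp)   # close to the true 0.1
```

Per-sample noise drops from 1.0 to about 1/16 after 256 averages, so the 0.1-amplitude sine that was invisible in any single trace stands clear of the residue.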

--
- Russell DeAnna to...@lerc.nasa.gov

Lon Stowell

Mar 28, 1995, 11:33:44 AM
In article <3l6m6u$c...@geraldo.cc.utexas.edu> to...@lerc.nasa.gov (Russell DeAnna) writes:
>
>If I take the output of a capacitive pressure
>sensor which is being driven by a low signal, and display it on
>an HP oscilliscope, it looks like noise. But the HP scope has an
>averaging option. If I average over 256 samples, a small signal
>emerges out of the morass. Would this be considered a situation
>where the signal is below the noise? The signal in this case
>is a periodic sine wave of infinite duration. I suppose, if
>you averaged long enough, any periodic signal could be seen.

If you take any noise with signal in it and average the entire
thing, then logarithmically amplify the result, then average
that, then log-amp the result, and so on, sooner or later you
have very nice odds of being able to pick the signal out.
Surprising how well this works... or did at the Foreign
Technology Labs at Wright-Patterson long enough ago that the
classification is no longer in force.

Human sound perception seems very aptly selected to do a fairly
decent job of this as long as the noise and signal are not
well correlated.

Kong Kritayakirana

Mar 28, 1995, 12:32:37 AM
In article <3kv642$d...@agate.berkeley.edu>,
SDuraybito <sdura...@aol.com> wrote:
>I think another way to look at this is to specify different bit rates for
>a given dynamic range. You can use 8, 16, 20, 24 or more bits to define,
>say, 96 dB of dynamic range.

If we are talking about linear PCM, and we want quantization noise
uncorrelated with the signal, then 16 bits will give you 98.1 dB of
dynamic range. (Sorry, folks! It's NOT 96 dB. The right formula is
6.02N + 1.76 dB for N bits.)

Using 8 bits to get that low noise is impossible no matter what you do.
Using more than 16 bits to get the same level of noise is suboptimal.
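For anyone who wants to plug in numbers, that formula is a couple of lines of Python (just the textbook 6.02N + 1.76 dB relation for an ideal linear quantizer, nothing more):

```python
def pcm_dynamic_range_db(bits):
    # Ideal linear PCM: full-scale sine vs. quantization noise.
    # The 1.76 dB term comes from the sine's crest factor.
    return 6.02 * bits + 1.76

for n in (8, 16, 20, 24):
    print(f"{n} bits: {pcm_dynamic_range_db(n):.1f} dB")
# 16 bits comes out at 98.1 dB, as stated above
```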

[rest of "resolution" discussion deleted]

>I would also say that until we have evidence that an unzipped ZIP file has
>errors, the system works. Then again, can we safely say that until we see
>aliens they don't exist?

Have you ever read and tried to understand how Lempel-Ziv coding works? It's
not something you observe. It is an encoding/decoding algorithm which
guarantees that for any possible pattern of bits the encoding/decoding will
work perfectly (given that you are not running PKZIP on a Pentium. :) :)
Well, so far no one has noticed an error in their work, but there's a
small chance that PKZIP is flawed. But then again, it uses a 32-bit CRC for
error checking. Which means that if the LZ compression itself fails and produces
a correct CRC, then we are doomed. :)
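The round-trip guarantee is trivial to check mechanically. A sketch in Python using the standard zlib module (an LZ77-family codec standing in for PKZIP's compressor here; it is not PKZIP itself):

```python
import os
import zlib

# Lossless round trip: for ANY pattern of bits, decompress(compress(x)) == x.
for _ in range(100):
    data = os.urandom(1000)          # arbitrary, incompressible bytes
    assert zlib.decompress(zlib.compress(data)) == data

# Random input usually comes out slightly LARGER after compression --
# that's container overhead, not data loss; the round trip stays exact.
blob = os.urandom(1000)
print(len(zlib.compress(blob)))      # typically a bit more than 1000
```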

Mark Brindle

unread,
Mar 28, 1995, 12:46:39 AM3/28/95
to
SDuraybito (sdura...@aol.com) wrote:

: I think another way to look at this is to specify different bit rates for
: a given dynamic range. You can use 8, 16, 20, 24 or more bits to define,
: say, 96 dB of dynamic range. We seem comfortable that 16 bits does a
: credible job. I think DeGroot is implying that 20 bits or higher would
: allow for finer differences in spl levels between bits. This can be
: thought of as "high resolution."

Not so! As long as you're talking about linear (as opposed to
companding) ADCs and DACs, "dynamic range" and "number of bits"
are *absolutely* equivalent. They're simply different *units*
for expressing the same quantity -- it's *exactly* analogous to
measuring a distance in "meters" vs "feet".

There's nothing "arbitrary" about "pairing" 96 dB and 16 bits;
by definition, the two are mathematically *identical*:

20 * log(2^16) = 96.33 dB

Replace the exponent in the log term with *any* other power of
two, and you get a *totally different* dynamic range:

20 * log(2^8) = 48.16 dB

Thus, you can't "use 8 bits to define 96 dB dynamic range" any
more than you can "use 2 inches to define a kilometer". OTOH,
I agree that this is what you and Mr. DeGroot have proposed.

: I would also say that until we have evidence that an unzipped ZIP file has
: errors, the system works. Then again, can we safely say that until we see
: aliens they don't exist?

Wrong! We can say with absolute certainty that *each and every*
unzipped ZIP file is 100% bit-for-bit identical to the original.
With any LOSSLESS compression/decompression algorithm, there is
a provable *mathematical equivalence* between the compressed and
uncompressed representations. It's *mathematically impossible*
to have "errors" of the type that you suggest; as long as the
algorithm is implemented correctly, the result is not debatable.

The best analogy I can offer is that, for a properly programmed
pocket calculator, (A + B) *INVARIABLY EQUALS* (B + A). Happily,
mathematics is sufficiently reliable that we don't have to test
the truth of this assertion for every possible (A,B) combination.
And apparently, it's possible to implement software that ALWAYS
gets the right answer.

(2+2 = 3) ...for small values of 2,

Mark

R!ch

unread,
Mar 30, 1995, 2:56:26 AM3/30/95
to
On 28 Mar 1995, Mark Brindle wrote:

> There's nothing "arbitrary" about "pairing" 96 dB and 16 bits;
> by definition, the two are mathematically *identical*:
>
> 20 * log(2^16) = 96.33 dB
>
> Replace the exponent in the log term with *any* other power of
> two, and you get a *totally different* dynamic range:
>
> 20 * log(2^8) = 48.16 dB
>
> Thus, you can't "use 8 bits to define 96 dB dynamic range" any
> more than you can "use 2 inches to define a kilometer". OTOH,
> I agree that this is what you and Mr. DeGroot have proposed.

I gotta admit I'm slightly confused here. I think I understand the
maths above, but I still don't see why one can't have more resolution
with a given dynamic range. (Am I correct in thinking that dynamic
range is the difference between the quietest signal and the loudest
one?)

I like to think of this situation as a volume control on an amplifier.
The maximum output is when the control is fully clockwise, the
minimum when it's fully anticlockwise. The way I perceive it, the
number of bits is like having indentations in the volume control - the
more indentations you have, the finer you can control the level you
want to listen at.

With this analogy, with 16 bits you get 2^16 'indentations', but with
8 bits, you only get 2^8 indentations. So, even though the maximum &
minimum levels are the same, the steps between them are finer for
more bits.

Putting it another way, suppose we have 96 dB of dynamic range, and
our volume control has 96 (linear) steps: each step == 1 dB change.
If we change the control to have 192 (linear) steps (but leave the
dynamic range the same), each step will == 0.5 dB change: more
resolution.

I'm not arguing that the number of bits and SNR *aren't* linked,
but I can't understand why it has to be linked to resolution.
Thinking about it, the only link I can see is that when you have a
big enough number of steps, the difference between them is swamped
by the noise, so I agree that there is a finite (limited by
the noise in the system) amount of available resolution.

---
R!ch

If it ain't analogue, it ain't music.
#include <disclaimer.h> \\|// - ?
(o o)
/==============================oOOo=(_)=oOOo=====\
| Richard Teer ri...@isltd.insignia.com |
| Insignia Solutions |
| Voice: 01494 453409 |
| Fax: 01494 459720 |
\================================================/

John DeGroof

unread,
Mar 30, 1995, 3:00:00 AM3/30/95
to
In <3lf4ro$4...@netnews.upenn.edu> R!ch <Richar...@isltd.insignia.com>
writes:

>I like to think of this situation as a volume control on an amplifier.
>The maximum output is when the control is fully clockwise, the
>minimum when it's fully anticlockwise. The way I perceive it, the
>number of bits is like having indentations in the volume control - the
>more indentations you have, the finer you can control the level you
>want to listen at.

[quoted text deleted by RD]

The same analogy I tried to make, and like me, no doubt you'll be flamed
for it. I think this technology is possible, but what no one has
mentioned yet is the voltage values of the highest and lowest numbers.
If they were the same for different bit values, the output (dynamic range)
shouldn't change, but there should be much more digital resolution. I
don't know all the physics and formulas behind the theory, but it makes
sense logically to me.

Chris Caudle

unread,
Mar 30, 1995, 3:00:00 AM3/30/95
to
jdeg...@ix.netcom.com (John DeGroof) wrote:
>In <3lf4ro$4...@netnews.upenn.edu> R!ch <Richar...@isltd.insignia.com>
>writes:
>> The way I perceive it, the
>>number of bits is like having indentations in the volume control - the
>>more indentations you have, the finer you can control the level you
>>want to listen at.
[DeGroof writes:]

> I think this technology is possible, but what no one has
>mentioned yet is the voltage values of the highest and lowest numbers.
>If they were the same for different bit values, the output (dynamic range)
>shouldn't change, but there should be much more digital resolution.

What you are leaving out is that the value of the _lowest_ numbers
changes. The increase in the noise floor you get from using a smaller
number of bits increases the value of the lowest unambiguous value
you can represent. The maximum stays the same, so the difference,
the dynamic range, i.e. the resolution, decreases.

Chris Caudle
cau...@bangate.compaq.com

Bernhard Muller

unread,
Mar 31, 1995, 3:00:00 AM3/31/95
to
In a previous posting, R!ch (Richar...@isltd.insignia.com) writes:

> I like to think of this situation as a volume control on an amplifier.
> The maximum output is when the control is fully clockwise, the
> minimum when it's fully anticlockwise. The way I perceive it, the
> number of bits is like having indentations in the volume control - the
> more indentations you have, the finer you can control the level you
> want to listen at.
>
> With this analogy, with 16 bits you get 2^16 'indentations', but with
> 8 bits, you only get 2^8 indentations. So, even though the maximum &
> minimum levels are the same, the steps between them are finer for
> more bits.
>

Let me try a different approach. First remember that dB is a _ratio_.
So the change from 1 volt to two volts is 6 dB. A change from 1000 volts
to 2000 volts is also 6 dB.

Every time you add a bit, you double the available number of intervals.
If, on adding a bit, the interval size stays constant (say 1 uV), you have
twice as many intervals, and the maximum voltage encoded doubles. This
doubling is a 6 dB increase in dynamic range. The smallest voltage
encodable is 1 uV, and the largest is 1*2^n uV. If you want to increase
resolution, your idea seems to be "let's let each interval be just 1/2
uV." What happens now? The maximum encodable voltage is also halved, and
the dynamic range stays the same. Remember, the dynamic range is just the
ratio (that word again) of the largest encodable voltage divided by the
smallest encodable (non zero) voltage.

So your volume control analogy is flawed: if you make the
increment smaller, the smallest encodable voltage does not stay constant;
it gets smaller, as does the largest encodable voltage, which has to be 2^n
times bigger than the smallest. And the dynamic range stays the same.

There are non-linear encoding schemes around that make the interval per
bit change depending on where in the useful range you are. But these all
have pretty severe other problems, and linear encoding seems to be the
best all-around solution.


Andrew Charles +1 708 979 0800

unread,
Mar 31, 1995, 3:00:00 AM3/31/95
to
In article <3lf4ro$4...@netnews.upenn.edu> Richar...@isltd.insignia.com
writes:

>On 28 Mar 1995, Mark Brindle wrote:
>> There's nothing "arbitrary" about "pairing" 96 dB and 16 bits;
>> by definition, the two are mathematically *identical*:
>>
>> 20 * log(2^16) = 96.33 dB
>>
>I gotta admit I'm slightly confused here. I think I understand the
>maths above, but I still don't see why one can't have more resolution
>with a given dynamic range. (Am I correct in thinking that dynamic
>range is the difference between the quietest signal and the loudest
>one?)

What I think you're missing is that decibels are ratios. Specifically:

n = 20 log(P2/P1) where P2 is the amplitude of the sound under
consideration and P1 is the amplitude of some reference level.

If X represents the change in amplitude corresponding to the least
significant bit, then what dynamic range can be represented
with k bits?

For ease of calculation, let P1 = amplitude X:

N_min = 20 log(X/P1) and N_max = 20 log(((2^k)*X)/P1)

which reduces to:

N_min = 20 log(1) = 0 and N_max = 20 log(2^k)

So their difference (the "dynamic range") is just 20 log(2^k) dB.

Andrew Charles
ac...@intgp1.att.com

Wolfgang Schwanke

unread,
Mar 31, 1995, 3:00:00 AM3/31/95
to
sdura...@aol.com (SDuraybito) writes:

>I think another way to look at this is to specify different bit rates for
>a given dynamic range. You can use 8, 16, 20, 24 or more bits to define,
>say, 96 dB of dynamic range.

No.

The number of bits DEFINES the dynamic ranges.

Dynamic range is the relation between the smallest and the largest
signal we can still reproduce, i.e. the faintest sound just above the
noise level, and the loudest sound just below overload. In _digital_
systems that is equivalent to the value of the largest digital word
to the smallest. (In analogue it's simply the range between tape
hiss and tape overload, for example).

The largest is "all bits set to 1": 2^(no. of bits).
The smallest is simply the value "1".

Therefore: Dynamic range in digital is equivalent to 2^(#bits) : 1,
and then the logarithmic stuff applied to it for scaling (definition
of dB).

8 bits: 2^ 8 = 256 "steps", gives 20 * log(256 : 1) = 48 dB
16 bits: 2^16 = 65536 "steps", gives 20 * log(65536 : 1) = 96 dB
24 bits: 2^24 = 16777216 "steps", gives 20 * log(16777216 : 1) = 144 dB

>We seem comfortable that 16 bits does a
>credible job. I think DeGroot is implying that 20 bits or higher would
>allow for finer differences in spl levels between bits. This can be
>thought of as "high resolution."

And it's exactly the same as dynamic range, just a different word for it.

>I would also say that until we have evidence that an unzipped ZIP file has
>errors, the system works. Then again, can we safely say that until we see
>aliens they don't exist?

ZIP is not magic. It's an algorithm made by people using approved theories
and logic. We can safely say it always works, unless the algorithm has
a bug.

Greetings

Wolfgang

--
* Wolfgang Schwanke * TU Berlin * | Strasse der Leidenschaft, Strasse des
* wo...@cs.tu-berlin.de * | Gluecks. Ich schalt dich ein und kann
* wolf...@w250zrz.zrz.tu-berlin.de * | es kaum ertragen wenn die Geigen dein
* IRCNICK wolfi * | Ende ansagen. (Die Mimmis)

Mark Brindle

unread,
Mar 31, 1995, 3:00:00 AM3/31/95
to
John DeGroof (jdeg...@ix.netcom.com) wrote:

: The same analogy I tried to make, and like me, no doubt you'll be flamed
: for it. I think this technology is possible, but what no one has
: mentioned yet is the voltage values of the highest and lowest numbers.
: If they were the same for different bit values, the output (dynamic range)
: shouldn't change, but there should be much more digital resolution. I
: don't know all the physics and formulas behind the theory, but it makes
: sense logically to me.

No technology, physics, electronics, or formulas involved; it's
just a matter of simple carpentry! You wish to build a staircase
from "ground level" to the "top floor" -- and your *ONLY* design
constraint is that *ALL* the risers must be the *same* height.

Now, please explain how you can select the height of the risers
*independently* from the number of steps -- without changing the
elevation of the top floor.

...this is an open-book test,

Mark

Dave Platt

unread,
Apr 2, 1995, 4:00:00 AM4/2/95
to
>The same analogy I tried to make, and like me, no doubt you'll be flamed
>for it.

Well, the analogy is somewhat flawed, because it relies on an intuitive
model which doesn't match well onto the way that the digital process
actually works.

> I think this is technology is possible, but what noone has
>mentinoed yet is the voltage values of the highest and lowest numbers.
>If they were the same for different bit vaues, the output (dynamic range)
>shoudn't change, but there should be much more digital resolution. I
>don't know all the physics and formulas behind the theory, but it makes
>sense logically to me.

Here's how I can best sum up what you're missing, I think:

[1] You want to be able to define a bunch of intermediate voltage
levels, "in between" the existing ones, to add resolution.

[2] To do this, you'll need to be able to resolve an analog voltage
in steps which are smaller than the ones currently used (maybe half
the size, if you want 1 additional bit of resolution, or maybe much
smaller if you want several more bits of resolution), and assign
some extra bits to the number so that you can represent the voltages
in digital form.

[3] In a linear-quantization system, the voltage step between ANY two
adjacent numbers is the same (to within the linearity error of the
converters).

[4] Therefore: if you can make smaller voltage steps between any two
adjacent output levels, then you can make the same smaller voltage steps
between any other two adjacent levels.

If you can divide the voltage step between the "most positive" code and
its neighbor in half (so that you get more resolution between these two
values), you can make the same division of voltage between the code for
"zero output" and either of its neighbors (the "really tiny little
output voltages").

So... what have you done? By cutting the "minimum voltage step between
adjacent codes" in half, to increase resolution, you have also reduced
the "minimum positive" and "minimum negative" voltages in half. You can
now get the same accuracy-of-representation (the same relative amount of
quantization error) for a waveform which is only half the amplitude of
what you could handle before... and you still have the same maximum
output amplitude that you used to have...

... which means that you have just increased the dynamic range of the
system by 6 dB!

Q.E.D.: increasing the resolution of the system is precisely equivalent
to increasing its dynamic range.
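Dave's Q.E.D. can be checked numerically. A sketch in Python (NumPy assumed; the step sizes are illustrative, roughly 16-bit-scale figures, not any particular converter's spec):

```python
import numpy as np

def quantize(x, step):
    # Ideal linear quantizer: round each sample to the nearest step.
    return step * np.round(x / step)

t = np.linspace(0, 1, 1000, endpoint=False)
full = np.sin(2 * np.pi * 5 * t)      # full-scale test waveform
half = 0.5 * full                     # a waveform at half amplitude

def err_rms(x, step):
    return np.sqrt(np.mean((x - quantize(x, step)) ** 2))

# Halving the step size lets the half-amplitude waveform be represented
# with the same error *relative to its own size* as the full-scale one:
rel_full = err_rms(full, 1 / 32768) / 1.0
rel_half = err_rms(half, 1 / 65536) / 0.5
print(rel_full, rel_half)             # nearly identical
```

Smaller steps mean smaller representable signals at the same relative accuracy, i.e. 6 dB more dynamic range per added bit.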
--
Dave Platt dpl...@3do.com
USNAIL: The 3DO Company, Systems Software group
600 Galveston Drive
Redwood City, CA 94063

R!ch

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
On 31 Mar 1995, Richard D Pierce wrote:

[SuperSnip]
> The total number of different states that can be represented by a
> collection of appropriately weighted bits is simply equal to 2^n, where n
> is the number of bits. The smallest possible change AT ANY SIGNAL LEVEL is
> represented by a change in state of the least significant bit, thus, the
> smallest unambiguous change in voltage is equal to the maximum voltage
> divided by the total number of states. Thus, if you have a 16 bit CD
> player that is capable of a peak output of (to make the math simple) 2
> volt peak-to-peak, and 16 bits give a total of 2^16 distinct states
> (that's 65536), then the smallest unambigous voltage change from the
> player will be 2V/65536, or roughly 30.5 microvolts. The resulting
> dynamic range is simply 20 log (2V/30.5uV) = 96dB.

Is the encoding done by CD 16 bits + a sign bit, or is it 16 bits in
total? If it is the latter, does this mean that we *should* be discussing
a system with 15 bits worth of SNR?

> Let's compare that to VERY good 15 IPS analog tape machine: let's adjust

Why not use a 30 IPS analogue machine; that would be a better machine
to compare against. (Or would it - how much difference does the tape speed make,
anyway?)

> +- 1 volt. This noise is an ambiguous, unpredictable signal. If you get a
> change in your signal voltage equal to or less than the noise voltage,
> how do you know that that voltage change is due to a real signal change,
> or to noise? Answer is, you don't. The noise ambiguates the signal.

I think a lot of this discussion seems to imply that detail below the
noise floor is indiscernible to the listener; I'd like to state that in
some circumstances, I *can* hear what's below the noise floor.
I agree that there may be some ambiguity there, though.

> Now, what we HAVEN'T mentioned about the analog machine is that, unlike
> the digital system, whose ambiguity or resolving power remains constant
> with signal level, the analog machine actually has significantly worse
> resolving capability as the amplitude of the signal increases. This is
> due to a variety of effects, such as bias modulation noise, non-linearities
> in the tape, and so on. So while the digital system has a constant 30.5uV
> ambiguity, independent of level, the analog system actually has MORE
> ambiguity at higher levels. Modulation noise can increase the effective
> noise floor in the presence of large (especially high-frequency) signals
> by as much as 20dB or more.

This may be true, but is it not also true that most, if not all, digital
recordings are done at nowhere near 0 dB, because of the lack of headroom
inherent in the digital system? At least analogue starts clipping
gracefully.

[Lots of other stuff snipped]

I'm beginning to agree that we may have sufficient bits (though perhaps
settling for a 24 bit standard would be better), but there's still the
aspect of the 44.1 KHz sampling rate. Nyquist sez that this is sufficient
for a 20 KHz bandwidth limited signal, but natural sounds don't just
stop at 20 KHz; the harmonics go waaaayy up. While we may not be able
to directly hear these harmonics, our hearing of them doesn't just cut off;
it's a gradual thing. Also, is it not possible that these high frequency
harmonics could affect the lower frequency tones?

If it is argued that the sampling rate is fast enough - the question *still*
remains: why do so many people (myself included) find CD and digital
recording in general so inferior to good analogue, or the real thing? All
this theory is fine, but *something* must explain why so many people find
digital (in its current state) so abhorrent.

Some folk have said that it's because of digital's lack of distortions.
If that is the case, why don't I find going to a live concert fatiguing?
Obviously, a live, acoustic concert will also be 'clean', because there's
no processing (excepting room acoustics) going on, so what gives??

Don Perley

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
R!ch <Richar...@isltd.insignia.com> wrote:
+ On 28 Mar 1995, Mark Brindle wrote:
+
+ > There's nothing "arbitrary" about "pairing" 96 dB and 16 bits;
+ > by definition, the two are mathematically *identical*:
+
+ Putting it another way, suppose we have 96 dB of dynamic range, and
+ our volume control has 96 (linear) steps: each step == 1 dB change.
+ If we change the control to have 192 (linear) steps (but leave the
+ dynamic range the same), each step will == 0.5 dB change: more
+ resolution.

It sounds like you are saying you want resolution instead of louder
music. Think of it this way: Say you add 8 bits, and have a recording
that still hits the maximum number. You play it and say "WAY too
loud!" so you turn down the volume. Now you have that extra
resolution you wanted, measured in microvolts/step or whatever.

"Dynamic range" doesn't just mean louder peaks.. it is the ratio between
the loudest signal and the smallest (aka resolution).

-Don Perley

Richard D Pierce

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
In article <3lp5d9$j...@tolstoy.lerc.nasa.gov>,

R!ch <Richar...@isltd.insignia.com> wrote:
>On 31 Mar 1995, Richard D Pierce wrote:
>> Thus, if you have a 16 bit CD
>> player that is capable of a peak output of (to make the math simple) 2
>> volt peak-to-peak, and 16 bits give a total of 2^16 distinct states
>> (that's 65536), then the smallest unambigous voltage change from the
>> player will be 2V/65536, or roughly 30.5 microvolts. The resulting
>> dynamic range is simply 20 log (2V/30.5uV) = 96dB.
>
>Is the encoding done by CD 16 bits + a sign bit, or is it 16 bits in
>total? If it is the latter, does this mean that we *should* be discussing
>a system with 15 bits worth of SNR?

Nope, it's the total number of bits. The use of one of the bits as a sign
bit does not change the total number of states. 15 bits gives you 32768
states, and the sign bit tells you whether it's positive or negative, for
a total of 65536 states, 16 bits' worth. The limits of a 16 bit signed
system are from -32768 to 32767, while for an unsigned 15 bit system
it's only 0 to 32767. The former has twice the range of the latter: 6 dB.
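In other words, the sign bit relabels the states rather than discarding one. A two-line check (nothing CD-specific here, just counting states):

```python
import math

# 16-bit signed and 16-bit unsigned both have 2**16 distinct states;
# the sign convention only relabels them.
signed_states = len(range(-32768, 32768))   # -32768 .. 32767
unsigned_states = len(range(0, 65536))      # 0 .. 65535
print(signed_states, unsigned_states)       # 65536 65536

# A 15-bit system has half as many states -- exactly 6 dB less range.
print(20 * math.log10(2 ** 16 / 2 ** 15))   # ~6.02
```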

>> Let's compare that to VERY good 15 IPS analog tape machine: let's adjust
>
>Why not use a 30 IPS analogue machine; that would be a better machine
>to compare against. (Or would it - how difference does the tape speed make,
>anyway?)

IF the electronics support it (and the playback EQ curves are not defined
beyond 20 kHz anyway), it'll give you an extra octave at the top at the
expense of losing an octave at the bottom. IF the electronics support
it (and they rarely will), it'll give you 3 dB more headroom.

>> +- 1 volt. This noise is an ambiguous, unpredictable signal. If you get a
>> change in your signal voltage equal to or less than the noise voltage,
>> how do you know that that voltage change is due to a real signal change,
>> or to noise? Answer is, you don't. The noise ambiguates the signal.
>
>I think a lot of this discussion seems to imply that detail below the
>noise floor is undicernable to the listener; I'd like to state that in
>some circumstance, I *can* can hear what's below the noise floor.
>I agree that there may be some ambiguity there, though.

Yes, signals are detectable below the noise ("ambiguity") floor in either
case. Looking, however, at individual samples, be they from an analog or
a digital machine, it is not possible to disambiguate signal changes from
the noise changes. Over large intervals it is, in either system.

>> Now, what we HAVEN'T mentioned about the analog machine is that, unlike
>> the digital system, whose ambiguity or resolving power remains constant
>> with signal level, the analog machine actually has significantly worse
>> resolving capability as the amplituide of the signal increases. This is
>> due to a variety of effect, such as bias modulation noise, non-linearities
>> in the tape, and so on. So while the digital system has a constant 30.5uV
>> ambiguity, independent of level, the analog system actual has MORE
>> ambiguity at higher levels. Modulation noise can increase the effective
>> noise floor in the presence if large (especially high-frequency) signals
>> by as much as 20dB or more.
>
>This may be true, but is it not also true that most, if not all, digital
>recordings are done at nowhere near 0 dB, because of the lack of headroom
>inherent in the digital system? At least analogue starts clipping
>gracefully.

Gracefully? That's generous. At midband frequencies, maybe, but at 0 VU
above a couple of kHz, there are already saturation artifacts starting to
appear.

Further, don't blame the medium for people's misuse of it. It's possible
to adjust the gain structure in the mastering and production process so
that this is not an issue.

>I'm beginning to agree that we may have sufficient bits (though perhaps
>settling for a 24 bit standard would be better), but there's still the
>aspect of the 44.1 KHz sampling rate. Nyquist sez that this is sufficient
>for a 20 KHz bandwidth limited signal, but natural sounds don't just
>stop at 20 KHz; the harmonics go waaaayy up.

How waaaay up? What's significant?

>While we may not be able
>to directly hear these harmonics, our hearing of them doesn't just cut off;
>it's a gradual thing.

How gradual? Hearing tests on young adult males, for example, show that
while they might have normal hearing sensitivity at 90 dB SPL at 15 kHz,
they have NO sensation at 18 or 20 kHz. That's hardly a "gradual thing".

>Also, is it not possible that these high frequency
>harmonics could affect the lower frequency tones?

Only if there is some non-linearity in the system (the WHOLE system, from
instrument to brain) that causes intermodulation products.

>If it is argued that the sampling rate is fast enough - the question *still*
>remains: why do so many people (myself included) find CD and digital
>recording in general so inferior to good analogue, or the real thing? All
>this theory is fine, but *something* must explain why so many people find
>digital (in its current state) so abhorrent.

But the assertion that so many people find CD and digital inferior MUST
reconcile that view with the fact that so many people don't. I won't for
a moment argue that CD's aren't anywhere near as good as the "real
thing." But I will argue that LP's aren't either. I have in my collection
CD and LP versions of the same performance and, in some cases, the same
mastering and mixdown. Those that sound awful on the CD generally sound
awful on the LP as well (the Chapuis Bach Organ series on Valois, for
example). THose that sound good on the CD sound good on the LP also. And
looking at the whole sample of several hundred of each, I find about the
same proportion of each to be offensive in similar ways. The Columbia LP
series of the Biggs Bach organ performances are awful: multi-mic'ed,
vague, diffuse images, very edgy, harsh: kinda sounds like many peoples'
complaints about CD's. Guess what? I'm not going to bother with the CD
re-releases because the whole recording and mastering concepts used are
simply flawed. On the other hand, the Gilbert Couperin series on Harmonia
Mundi are superb in both the LP and the CD version, with the CD holding a
slight edge in the low end.

>Some folk have said that it's because of digital's lack of distortions.
>If that is the case, why don't I find going to a live concert fatiguing?
>Obviously, a live, acoustic concert will also be 'clean', because there's
>no processing (excepting room acoustics) going on, so what gives??

And lots of noise and other things. But in most recordings, you simply
cannot compare the live performance with the recording, because you
weren't sitting where the microphones were sitting. Further, you aren't
privy to the recording, mastering and production process which is where,
in my opinion, MOST of the faults in either LP or CD reproductions are
introduced. For example, many of the Telarc recordings are so clean as to
be antiseptic. They suck, but it has little to do with the medium: like
many early examples of recorded media, they went overboard in attempting
to demonstrate the "superiority" of the medium, and made it sound awful
as a result. The few Telarc analog recordings I have heard frankly
sounded equally bad.

Bob Myers

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
John DeGroof (jdeg...@ix.netcom.com) wrote:
> I totally agree with you! I would like to see an analog format
> similar to laserdiscs using lasers to avoid wear. The only problem
> is that mastering is almost always done on a computer, in the
> digital domain, and therefore an analog format wouldn't benefit
> us at all. With digital, we can get damn close to the real thing
> in theory, but I have a motto that applies to more than just
> music (like pictures): Nothing beats the real thing!

This argument is based, as many are, on the erroneous assumption that
music somehow "is" analog in nature. It is not; "analog" in the
electronics sense does not in any way mean something that is "more like"
sound in its nature. Analog and digital recordings are simply two
different ways of REPRESENTING sound, and at the current (and foreseeable)
state of the art of the two, the various digital techniques come FAR
closer to accurately representing the original sound than ANY analog
medium.

Mark Brindle

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
John DeGroof (jdeg...@ix.netcom.com) wrote:

: Again, you're looking at another side effect. We're not trying to get
: a higher SPL, just more resolution. Imagine your volume control
: notched with 4 notches.

Now, imagine a volume control with 18,446,744,073,709,551,616 notches.
Obviously this is a big improvement over one with a mere 65536 notches;
but, to make good use of it, you're gonna need a *really big* knob.

Since there's no point in doing things half-way, I'd recommend using
a knob with a diameter of 300,000,000 km. Drill a small hole through
the center of the sun for the volume-control shaft, and position the
knob so that its edge is at a convenient location just outside your
listening-room window.

Of course, you'll want a calibrated control; so, just paint small
tick-marks around the knob's circumference. Be sure to use a fine
paint brush -- and *carefully* mark-off 500,000 notches per inch.
It's gonna require a little patience, but at one notch/second...

...you'll be done in about 600 billion years,

Mark

GMGraves

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
>This argument is based, as many are, on the erroneous assumption that
>music somehow "is" analog in nature. It is not; "analog" in the
>electronics sense does not in any way mean something that is "more like"
>sound in its nature. Analog and digital recordings are simply two
>different ways of REPRESENTING sound, and at the current (and foreseeable)
>state of the art of the two, the various digital techniques come FAR
>closer to accurately representing the original sound than ANY analog
>medium.

While I agree with your assessment of the nature of analog vs digital, I
have to disagree with your conclusions. THEORETICALLY, digital quantization
comes much closer to accurately representing the original sound than would
analog, but in my opinion, in actual real-world terms, it does not. I use
my ears, and they say that digital does not serve the music well. We are
also stuck with a system that was conceived in what was really the infancy
of digital signal processing. While we can play with filters and amplifier
stages and jitter and the like, we are still stuck with a standard,
conceived in the 1970s, which responds far less to refinement than does a
strictly analog record and playback system.

George Graves

Noam Bernstein

unread,
Apr 3, 1995, 3:00:00 AM4/3/95
to
John DeGroof (jdeg...@ix.netcom.com) wrote:

: In <3kb5oc$p...@netnews.upenn.edu> abr...@cs.columbia.edu (Steven Abrams)
: writes:

: It will work the same in that respect, but the file is NOT the same.
: IF you doubt this, look at the file with a text editor before you
: compress it, then compress and uncompress using the various
: compression methods listed above and you will find they are different.
: Some files zipped and unzipped are larger than the original!

Just because some IBM compression program is defective doesn't mean
that there's no such thing as a lossless compression.

: There may be, but I haven't seen one yet, and I've looked.

You must not be awake. Maybe there's no program that achieves some
arbitrarily good compression ratio for all data, but there certainly
are lossless compression algorithms. (E.g., replacing each run of a
repeated letter with its count followed by one copy of that letter --
which works only for data containing no digits -- compresses aaaaaaaaaaaa
into 12a. Uncompresses into aaaaaaaaaaaa. Ta Da! No loss.)

: Take any high resolution graphics file, use something like JPEG, then
: view the file again only this time zoom in (320x200) and look at the
: pixels. You will see the difference in the before vs the after.

So? JPEG isn't a lossless compression algorithm.

I don't even know why I'm arguing.

Noam

R!ch

unread,
Apr 4, 1995, 3:00:00 AM4/4/95
to
On Mon, 3 Apr 1995, Richard D Pierce wrote:

> >I'm beginning to agree that we may have sufficient bits (though perhaps
> >settling for a 24 bit standard would be better), but there's still the
> >aspect of the 44.1 KHz sampling rate. Nyquist sez that this is sufficient
> >for a 20 KHz bandwidth limited signal, but natural sounds don't just
> >stop at 20 KHz; the harmonics go waaaayy up.
>
> How waaaay up? What's significant?

I'm not sure how far up they go, or how they're significant. But, just
because we haven't determined how significant things are, it doesn't mean
that they aren't; eg before certain types of distortion were 'discovered',
it wasn't realised that they were significant.

> >If it is argued that the sampling rate is fast enough - the question *still*
> >remains: why do so many people (myself included) find CD and digital
> >recording in general so inferior to good analogue, or the real thing? All
> >this theory is fine, but *something* must explain why so many people find
> >digital (in it's current state) so abhorent.
>
> But the assertion that so many people find CD and digital inferior MUST
> reconcile that view with the fact that so many people don't. I won't for
> a moment argue that CD's aren't anywhere near as good as the "real
> thing." But I will argue that LP's aren't either. I have in my collection

Agreed, but to me, vinyl comes closer.

> And lots of noise and other things. But in most recordings, you simply
> cannot compare the live performance with the recording, because you
> weren't sitting where the microphones were sitting. Further, you aren't
> privvy to the recording, mastering and production process which is where,
> in my opinion, MOST of the faults in either LP or CD reproductions are
> introduced. For example, many of the Telarc recordings are so clean as to

Ah ha, I think here we get to the crux of the matter - incompetent engineers/
producers. I think some of these boneheads should be made to listen to
their material on a good hifi, just so they can see how crap it is.

These people usually argue that they're targeting the lowest common denominator,
ie cheap car radios or whatever. The thing is, presumably, most of the
people who listen (seriously) on this sort of gear couldn't care less about
the sound quality anyway. So why not do the best job they can, and mix for
a decent hifi? But I suppose *that's* a different thread altogether...

C.M. Hicks

unread,
Apr 4, 1995, 3:00:00 AM4/4/95
to
t...@me.rochester.edu (Tim Takahashi) writes:

>I have experienced extraordinary realism from 78rpm disks, which
>have a nominal 40db s/n ratio and 50-9000 hz frequency response.

>I have tried to capture that on 65db s/n
>ratio 7-1/2 ips open reel, or 80db s/n ratio dolby C cassettes
>and you experience the meaninglessness of "s/n" ratio.
>I haven't tried this with a digital recorder, but I envision
>the transfer will still be imperfect.

So how were you doing the transcription, and how were you listening
to the original 78's?

As an illustration as to why this is an important issue, consider a 78
vinyl being played on a large acoustic gramophone machine. Sound
emanates from the whole machine - of course most notably from the horn
itself, but also from the soundbox, the stylus, the disk and the whole
casing of the machine. Now if we try to transcribe that record to tape
using an electrical pickup we are bound not to capture the same sound.

Nimbus in fact remasters some 78's to CD using a genuine old acoustic
gramophone, set up in an acoustically suitable room with a pair of B&K
microphones. They argue that this is the best way to capture that
elusive "78" sound. Notice that they make a stereo recording, despite
the source (ie the vinyl) being mono.

>Regarding sampling rates, etc. Nyquist applies to steady state
>signals with no phase component.

Nyquist says nothing of the sort. The sampling theorem talks about
bandwidths, and makes no distinction between steady-state and
transient signals of the same bandwidth. I don't understand what you
mean by the phrase "no phase component". Any signal has a phase - it
is that which tells where the signal occurs along the time axis.

>I would anticipate that a
>44khz 16-bit sampling system will have measurable error on
>wide band transient signals...

It will certainly have measurable error on signals of greater than
22kHz bandwidth and greater than 97dB dynamic range. Whether these
constraints make the system unsuitable for audio reproduction is a
different issue.

>Certainly when I perform data acquisition for industrial projects
>10-12 samples per wavelength are desirable, not 2. (and that
>counts in the antialiasing filtration, which can then be less steep)

We've been over this so many times in this forum, and in r.a.pro and
r.a.tech. I agree that the truth in this case is not particularly
intuitive, but it honestly does work. Provided the sample rate is
greater than twice the bandwidth of the signal of interest then
no information is lost by the sampling process. It follows then
that the signal is completely and unambiguously reconstructible from
those samples.
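For anyone who would rather see this numerically than take it on faith, here is a small sketch of the reconstruction (toy parameters of my choosing; the residual error comes only from truncating the interpolation sum to a finite window of samples):

```python
# Rebuild a band-limited signal *between* its samples via sinc interpolation
# (Whittaker-Shannon). Sample rate and frequency are illustrative only.
import math

fs = 10.0   # sample rate (arbitrary units)
f = 3.0     # signal frequency, safely below fs/2
n0, n1 = -200, 200
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(n0, n1)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    # interpolate from the stored window of samples
    return sum(s * sinc(t * fs - n) for n, s in zip(range(n0, n1), samples))

t = 0.137   # a point between sample instants
err = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
print(f"reconstruction error: {err:.1e}")  # small; shrinks as window grows
```

There is no "connect the dots" here: the sampled values pin down the band-limited waveform everywhere along the time axis.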

I agree that in certain applications it is often easier to sample at a
much higher rate than is really required, in order to relax the
filtering requirements. We do this at the conversion stages in audio
equipment (oversampling), but then we filter and decimate in the
digital domain to keep the amount of data manageable.

Christopher
--
=====================================================
Christopher Hicks http://www.eng.cam.ac.uk/~cmh
c...@eng.cam.ac.uk Voice: (+44) 223 332767
=====================================================

Russell DeAnna

unread,
Apr 4, 1995, 3:00:00 AM4/4/95
to
Richard D Pierce <DPi...@world.std.com> wrote:
>introduced. For example, many of the Telarc recordings are so clean as to
>be antiseptic. They suck, but it has little to do with the medium: like

Does anyone know the details of the Telarc recording process? They list
two microphones. But they don't talk about any signal manipulation. I
was under the impression that they added nothing to the raw-mike feed.

I just borrowed the Telarc/Cleveland Orch/Beethoven-symphony series. And
they are clean. I didn't notice "antiseptic," but I wasn't real impressed
with the energy or lack thereof. In short, these are lifeless records.
I wanted to like them since both Telarc and The Cleveland Orch. are local.
But I much prefer the old CBS/Sony/Columbia Szell/Cleveland series.
Of course, these are also terrible-sounding records, but there's energy.

C.M. Hicks

unread,
Apr 4, 1995, 3:00:00 AM4/4/95
to
to...@lerc.nasa.gov (Russell DeAnna) writes:

>If I take the output of a capacitive pressure
>sensor which is being driven by a low signal, and display it on
>an HP oscilliscope, it looks like noise. But the HP scope has an
>averaging option. If I average over 256 samples, a small signal
>emerges out of the morass. Would this be considered a situation
>where the signal is below the noise? The signal in this case
>is a periodic sine wave of infinite duration. I suppose, if
>you averaged long enough, any periodic signal could be seen.

Yes indeed, but think about what this process is doing. By averaging
you are effectively filtering (with a time-varying, and possibly
non-linear filter) the signal. The triggering circuit ensures that the
filter varies in such a manner that the signal you are looking for is
guaranteed to fall in the filter passband. Anything non-periodic, and
anything with a period unrelated to the period of the triggering
signal falls in the stop-band of the filter, and is therefore reduced
in amplitude.

So yes, averaging is a powerful way of bringing signals out of noise,
but there's no such thing as a free lunch. In averaging 256 traces
you have found the signal, by looking at 256 times as much signal as
before. However, you have lost information about anything that is
asynchronous with the trigger (which is, after all, the whole point,
but important to notice in information-theoretic terms).
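The trade described here is easy to demonstrate with made-up numbers: a weak sine averaged over 256 synthetic "triggered traces" (all parameters below are illustrative, not taken from the HP scope in question):

```python
# Trace averaging: residual noise falls roughly as sigma/sqrt(traces).
import math, random

random.seed(1)
N, traces = 100, 256
signal = [0.1 * math.sin(2 * math.pi * n / N) for n in range(N)]  # weak sine

def noisy_trace():
    # one triggered sweep: the same signal plus fresh unit-variance noise
    return [s + random.gauss(0.0, 1.0) for s in signal]

avg = [sum(col) / traces
       for col in zip(*(noisy_trace() for _ in range(traces)))]

# expect roughly a sqrt(256) = 16x improvement over a single trace
resid = math.sqrt(sum((a - s) ** 2 for a, s in zip(avg, signal)) / N)
print(f"residual rms noise: {resid:.3f} (single-trace rms was ~1.0)")
```

Anything asynchronous with the trigger gets averaged down in exactly the same way, which is the information-theoretic point above.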

Robert F. Antoniewicz

unread,
Apr 4, 1995, 3:00:00 AM4/4/95
to
First, I owe an apology to the group for my simple minded error with the
calculator. In my previous post, I calculated the relative magnitude for
the lowest order bit change in a 16 bit word. Inadvertently, I used the
LN key instead of the LOG key and wound up getting some number like 222dB.
My error was pointed out to me, and the correct number is approx 96dB.

Next, even if you assume one bit is a sign bit, this calculation does not
change, because whether you count from 0 to 65535 or from -32768 to 32767
(the difference being 65535) the resolution in magnitude is the same.
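The corrected figure is quick to verify, and the ln/log slip reproduces the bogus number exactly (a one-off sketch):

```python
# 16-bit dynamic range: 20*log10(2**16) -- and the ln-for-log10 mistake.
import math

print(f"20*log10(2**16) = {20 * math.log10(2 ** 16):.1f} dB")  # ~96.3 dB
print(f"20*ln(2**16)    = {20 * math.log(2 ** 16):.1f}")       # ~221.8, the slip
```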

Finally, I have some questions.

Let's say you have a 16 bit word sampling at 44.1 kHz, and you are trying to
generate a simple sine wave of 11.025 kHz (1/4 of 44.1 kHz). The obvious
result (assuming zero initial phase) is that the samples will be:

32768 65535 32768 1 32768 ..... (for simplicity, magnitude is 32767)

How much phase shift do you get if you start off at 32767? If you start
there, how far off from 90 degrees is the next value (65534)? If you
continue this process, the next value is presumably 32769. How far off is
that from 90 degrees from the previous value (remembering that you have to
go through the peak of the wave here - another mistake lurking for me to
make :-)
Also, assuming the zero initial phase wave, what is the next lower
frequency that you can represent in 16 bits at 44.1 kHz??? The next
higher one? And how do all these numbers change with bit count and sample
rate?

I know I can figure these things out. But the gray matter is on hold :-)
If I do, I will follow up with the answers. Does anyone have the answers
at hand? It would be an interesting listening test to use a continuous
(analog) wave generator and vary the frequency slightly, and then digitize
and playback the wave and see if you can hear a difference. Try
digitizing at 16 bits and higher (and lower perhaps if you can't hear the
difference). Also, try it at various sample rates.

Bob Antoniewicz

anto...@eagle.dfrc.nasa.gov

Of course you must realize, that I do not
speak for the organization I work for.

Bernhard Muller

unread,
Apr 4, 1995, 3:00:00 AM4/4/95
to
In a previous posting, R!ch (Richar...@isltd.insignia.com) writes:

> If it is argued that the sampling rate is fast enough - the question *still*
> remains: why do so many people (myself included) find CD and digital
> recording in general so inferior to good analogue, or the real thing? All
> this theory is fine, but *something* must explain why so many people find
> digital (in it's current state) so abhorent.

The same phenomenon that convinced many people in the 1650s that there
_must_ be something to this witchcraft business because so many people had
experienced its effects.

John DeGroof

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to

>Ah ha, I think here we get to the crux of the matter - incompetant engineers/
>producers. I think some of these boneheads should be made to listen to
>their material on a good hifi, just so they can see how crap it is.
>
>These people usually argue that they're targetting the lowest common denominator
>ie cheap car radios or whatever. The thing is, presumably, most of the
>people who listen (seriously) on this sort of gear couldn't care less about
>the sound quality anyway. So why not do the best job they can, and mix for
>a decent hifi? But I suppose *that's* a different thread altogether...

Now you've hit home with me. Being an engineer, I can tell you many
horror stories about how little some engineers really know. It's
sickening to see some of these people making records. They just
go through the ranks in a studio, learn how to operate the equipment,
and record, all without any sound/music theory. That knowledge is
the cream that makes us able to call ourselves "engineers", while
the rest should just be referred to as "mixers".

John DeGroof

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to
to...@lerc.nasa.gov (Russell DeAnna) writes:

>Does anyone know the details of the Telarc recording process? They list
>two microphones. But they don't talk about any signal manipulation. I
>was under the impression that they added nothing to the raw-mike feed.

They also say in the back of every CD booklet "No compression or
limiting was used during any phase of this recording" or something
close to that. I would have to agree with their philosophy, except
in the case where I want to use such devices for a deliberate
effect.

John DeGroof

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to
In <3kk40c$7...@eyrie.graphics.cornell.edu> DPi...@world.std.com
(Richard D Pierce) writes:

>I can take a pile of files, compress them using pkzip, then uncompress
>them and do a bit for bit comparison, and will find not s single bit
>different.

I just downloaded the current version of pkzip, and just finished
testing it on several files. It doesn't seem to change the size of the
file like the older version did. The old version would take a <1K
file, compress it, uncompress it, and the result would be to the next
highest 1024 interval. For that and a few other reasons, I haven't
been a big fan of pkzip, and haven't kept up with the versions.

As for GIF, I took a GIF file that was 270,926 and converted it to a
bitmap using WinGIF, then converted it back to GIF format. The new
size is 260,913. What's going on there? Granted, the second version
has a better compression ratio, but what was eliminated? I can repeat
that on different GIF's and the sizes are sometimes larger, sometimes
smaller.

>But most JPEG compressors DO NOT CLAIM TO BE LOSSLESS. Thye do not
>make that assertion.

Ok, but someone claimed JPEG was NOT lossless, and that got me upset
since I thought it was the one with the most loss. Many people have
argued with me on that one for years. So it claims to not be lossless
in the doc's? Good. I'll have to quote it next time.

>You, on the other hand, have made the utterly unssupportable and
>demonstrably false grand, sweeping assertion that "all" compression
>schemes loose data. In your assertion above, you assert that, in
>answer to the posters question about using PKzip, that even text files
>WILL be different, an assertion that is provably wrong in every single
>text file, every single binary file that you put through it.

No, I said all video compression was different, not all compression. I
also stated I didn't have much experience with audio compression and
don't trust it due to my experiences with video compression. I also
said that binary file sizes were different when unzipped, and I have
since found out why.

>original, flip it's polarity, summ it with the compressed-uncompressed
>version and you cannot sum to zero.

How do you flip a files polarity, and then sum it? I'm curious.

>However, your assertion that ALL compression schemes are lossy is
>unsupportable.

Someone else said that for me in a reply to my original post.

If anyone wants to reply to this thread, please send e-mail instead of
replying to this group. It's getting a little off topic, although it
faintly relates to audio compression.

--

Russ Arcuri

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to
In article <3kpfqv$g...@eyrie.graphics.cornell.edu>, jdeg...@ix.netcom.com
(John DeGroof) wrote:

> I totally agree with you! I would like to see an analog format
> similar to laserdiscs using lasers to avoid wear. The only problem
> is that mastering is almost always done on a computer, in the
> digital domain, and therefore an analog format wouldn't benefit
> us at all. With digital, we can get damn close to the real thing
> in theory, but I have a motto that applies to more than just
> music (like pictures): Nothing beats the real thing!

I would agree with the statement "Nothing beats the real thing!" but point
out that no analog recording method yet devised gets you as close to the
real thing as the best digital recording methods do.

Russ

Richard D Pierce

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to
In article <3lulbo$g...@geraldo.cc.utexas.edu>,

John DeGroof <jdeg...@ix.netcom.com> wrote:
>In <3kk40c$7...@eyrie.graphics.cornell.edu> DPi...@world.std.com
>(Richard D Pierce) writes:
>
>>I can take a pile of files, compress them using pkzip, then uncompress
>>them and do a bit for bit comparison, and will find not s single bit
>>different.
>
>I just downloaded the current version of pkzip, and just finished
>testing it on several files. It doesn't seem to change the size of the
>file like the older version did. The old version would take a <1K
>file, compress it, uncompress it, and the result would be to the next
>highest 1024 interval. For that and a few other reasons, I haven't
>been a big fan of pkzip, and haven't kept up with the versions.

No, John, you are looking at the difference between file length and
cluster allocation. I have been using pkzip for something like 8 years,
over 4 or 5 different releases, and before that pkarc. I have also used
tar, lharc, gnuzip, and others and not once is the behavior you're
describing attributable to anything but pilot error. If, for example, on
some shells, you ask for ls -l of a file, it will list the file usage
down to the byte, whereas if you use ls -s, it will list it in 1024 byte
blocks. Others will give the file disk allocation (like du) in allocation
units. On DOS, for example, the smallest allocation unit is a cluster,
and a cluster size is dependent upon the disk size. For a floppy disk,
it's 512 bytes. For a 1 gig hard drive, it's 16384 bytes.
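On POSIX systems the two numbers being distinguished here -- logical file length versus allocated storage -- are both visible from `stat`; a small sketch:

```python
# Logical length (st_size) vs. allocated storage (st_blocks, in 512-byte
# units, rounded up to the filesystem's allocation granularity). POSIX only.
import os, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 700)        # a 700-byte file
    f.flush()
    os.fsync(f.fileno())       # make sure blocks are actually allocated
    path = f.name

st = os.stat(path)
print(f"logical size : {st.st_size} bytes")           # exactly 700
print(f"allocated    : {st.st_blocks * 512} bytes")   # rounded up to a cluster
os.unlink(path)
```

A tool that reports allocation rather than `st_size` will show exactly the round-number "growth" described above, with no bytes of content changed.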

>>But most JPEG compressors DO NOT CLAIM TO BE LOSSLESS. Thye do not
>>make that assertion.
>
>Ok, but someone claimed JPEG was NOT lossless, and that got me upset
>since I though it was the one with the most loss. Many people have
>argued with me on that one for years. So it claims to not be lossless
>in the doc's? Good. I'll have to quote it next time.

Yes, you will. Nothing like the facts, after all.

>>original, flip it's polarity, summ it with the compressed-uncompressed
>>version and you cannot sum to zero.
>
>How do you flip a files polarity, and then sum it? I'm curious.

Treat the file as a string of signed numbers, be they signed bytes, signed
shorts (in the case of 16 bit linear PCM audio data files), longs or
whatever. Read a file in, take the twos complement of each, and it's
"flipped" (the effect is identical to multiplying each value by -1).

There's a whole array of processing that can be done, such as truncation
and redither, resampling, and so on, simply by acting on the file data.
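As a concrete sketch of the flip-and-sum idea on 16-bit PCM samples (toy data of my choosing; note the one asymmetric two's-complement value, -32768, which has no positive counterpart and must clip):

```python
# Flip the polarity of 16-bit PCM samples and sum with the original:
# identical data nulls to zero, except where -(-32768) clips to 32767.
import array

original = array.array("h", [0, 12000, -12000, 32767, -32768, 5])
flipped = array.array("h", [max(-32768, min(32767, -s)) for s in original])

residual = [a + b for a, b in zip(original, flipped)]
print(residual)  # [0, 0, 0, 0, -1, 0]
```

A compressed-then-uncompressed file that fails this null test has demonstrably lost (or altered) data; a lossless one sums to zero everywhere.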

GMGraves

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to
>The same phenomenon that convinced many people in the 1650s that there
_must_ be something to this witchcraft business because so many people had
experienced its effects.<

Ah, come on. That type of response is an insult to the intelligence of all
who find digital less than musically satisfying! We're not talking about a
few isolated crackpots here! We're talking about many thousands of critical
listeners including some well known mainstream musicians and recording
personnel. How about Keith Jarrett? Or Bob Ludwig?
If you can't hear the difference, then I suggest that perhaps you aren't
listening critically enough.

George Graves

GMGraves

unread,
Apr 5, 1995, 3:00:00 AM4/5/95
to
Actually, Dave Wilson of Wilson Audio (and WATT, PUPPY) fame has done an
even better experiment. He has recorded an analog source to digital and
then on playback, has nulled-out the recorded sound by mixing an
inverse-phase version of the analog original with the digital playback.
What resulted was a fairly loud noise that seemed to follow the music
(which could still be heard faintly in the background, you can't 100% null
any complex analog signal) and sounded like an angry hornet's nest. The
noise was digital artifacts caused by clock signals, quantization error,
etc. Most unpleasant.

George Graves

Tim Takahashi

unread,
Apr 6, 1995, 3:00:00 AM4/6/95
to
C.M. Hicks <c...@eng.cam.ac.uk> wrote:

[cut by rgd]

Both transcription and listening are done over my home system
using a modern turntable, a 3-mil spherical stylus and a homebrew
preamplifier with the appropriate equalization curves (250 Hz and
750 Hz bass turnover frequencies, various treble rolloff points).

The ONLY difference was the insertion of a recording device
into the train of events...

tim

Mark Brindle

unread,
Apr 6, 1995, 3:00:00 AM4/6/95
to
GMGraves (gmgr...@aol.com) wrote:
: Actually, Dave Wilson of Wilson Audio (and WATT, PUPPY) fame has done an

This proves *absolutely nothing* about the perfection/imperfection
of digital recording. OTOH, it *does* point out the imperfection
of (existing) analog methods. As a thought-experiment, imagine the
same test done with a *PERFECT* recording machine (the technology
doesn't matter -- hypothetical perfection can be analog or digital).

Now, record "pass #1" of the "real" analog signal -- then play it
back 180 degrees out of phase with "pass #2" of the analog source.
What do you get? Lots of *RANDOM* noise, of course! The PERFECT
recording (by definition) is *100% identical* to analog "pass #1";
but, analog "pass #2" is *different* -- because some of the noise
components in the "real" playback chain are *RANDOM*.

What about the faint music in the background? Well, that's to be
expected too -- unless the analog playback machinery is capable of
reproducing the "authentic signal" with *absolute precision* from
pass to pass. It can't! It's a moot point as to whether you can
"100% null-out" a complex analog signal -- because there's NO WAY
to *EXACTLY* replicate a complex analog signal -- especially in a
random, noisy, non-cryogenic, real world.

...can you "null-out" two LPs?

Mark

Ron Cole

unread,
Apr 6, 1995, 3:00:00 AM4/6/95
to
JJMcF (jj...@aol.com) wrote:
: The basic questions here seem to be (1) Do you believe that the current
: digital recording systems have audible artifacts and (2) can these be
: minimized or eliminated by "upgrading" the digital system by adding bits
: or increasing sampling rate. The latter hope may be based on analog
: analogies--for example, tape recording has definite "tape" artifacts at 1
: 7/8 ips, but 15 ips tape has hardly any of these. Same for disc recording
: and so on. What we really need is listening comparisons with upgraded
: digital systems. Unfortunately, unlike tape or disc systems, these
: upgraded systems are largely in the hands of technicians and engineers who
: tend not to believe that the current systems in fact do have audible
: artifacts, and are unsympathetic to upgrading projects.

Well, tape at 15ips is hardly artifact-free, but it is truly better than 1
7/8ips. Most of the "High End" analog mastering was done at 30ips 1/2
track on 1/2" tape. This makes a real nice master but it does eat up tape.

As to increasing the sample rate and the number of bits: some recording
studios are in fact recording at 20 to 24 bits at 50 to 55 kHz. I sat in
on a recent session that was 48 tracks, 24 bits at 50 kHz. They then
convert back to 44.1 16 bit when CD mastering, to DAT. Check out the CD
stores for 20-bit masters.

[Moderator's Note: I wish I could buy 20 Bit MASTERS in *my* local CD
stores... RD]

Ron


Ghosh Amit

unread,
Apr 6, 1995, 3:00:00 AM4/6/95
to
While I like the idea of no compression etc, I have difficulty fully
enjoying the music on a Telarc recording of Vaughan-Williams' London
Symphony (Andre Previn and the LPO). Due to the amazing dynamic range
on this CD, I find the quiet passages barely audible. Turn up the
volume you say.... and when the loud passages appear, my room shakes!
And, no, my room is not the problem. It is therefore difficult to
reconcile this problem of wide dynamic range and volume.

Amit Ghosh
gh...@math.okstate.edu

Chuck Ross

unread,
Apr 7, 1995, 3:00:00 AM4/7/95
to
In article <3m2isa$7...@netnews.upenn.edu>, Ghosh Amit
<gh...@hardy.math.okstate.edu> wrote:

I've had many similar experiences with Telarc discs with very wide dynamic
range, and can say that one has to be extremely careful with some of
these. One would like to get 'natural' sounding levels on most of the
music, but when some of the full orchestral stuff lets loose, it can
easily damage speakers and crossovers, etc. One example is the frightening
bass drum rolls in the final track of "Eine Straussfest", which can easily
blow up speakers.

Another disc I've found to be actually unplayable (at least one cut), is
Reference Recording's "Pomp & Pipes", the last track of which, near the
end, has an incredible swell of level at very low frequency, full
orchestral plus big organ pipes. I've had no trouble listening to this on
headphones, but with speakers, it always makes me run for the gain
control...the speakers start "wobbling" even at moderate levels. It makes
me wonder if the CD could have been overmodulated somehow.

So, wide dynamic range is not always a good thing...I have yet to hear a
system that can reproduce the dynamics of a wide-range live performance.
Maybe judicious level-riding might be a good thing for recording
companies?

Chuck

Bob Katz

unread,
Apr 7, 1995, 3:00:00 AM4/7/95
to
Ghosh Amit <gh...@hardy.math.okstate.edu> wrote:
>
> While I like the idea of no compression etc, I have difficulty fully
> enjoying the music on a Telarc recording of Vaughan-Williams' London
> Symphony (Andre Previn and the LPO). Due to the amazing dynamic range
> on this CD, I find the quiet passages barely audible. Turn up the
> volume you say.... and when the loud passages appear, my room shakes!
> And, no, my room is not the problem. It is therefore difficult to
> reconcile this problem of wide dynamic range and volume.
>
> Amit Ghosh
> gh...@math.okstate.edu

Does this mean you are in favor of asking the few remaining record
companies who don't limit their dynamic range to compress their music
for you?

Personally, I'd rather have uncompressed, wide dynamic range music
well recorded, and on the occasions when that is too much (e.g.
disturbs neighbors, concentration, or my amplifier clips) the audio-
phile consumer could insert a "compressor", or dynamic range reducer
in his system. That way, the consumers who enjoy wide dynamic range
would not suffer.

Bernhard Muller

unread,
Apr 7, 1995, 3:00:00 AM4/7/95
to
In a previous posting, Ghosh Amit (gh...@hardy.math.okstate.edu) writes:

> While I like the idea of no compression etc, I have difficulty fully
> enjoying the music on a Telarc recording of Vaughan-Williams' London
> Symphony (Andre Previn and the LPO). Due to the amazing dynamic range
> on this CD, I find the quiet passages barely audible. Turn up the
> volume you say.... and when the loud passages appear, my room shakes!
> And, no, my room is not the problem. It is therefore difficult to
> reconcile this problem of wide dynamic range and volume.

In a live concert, the orchestra occasionally plays so softly as to be
barely audible. The need on the part of the audience to hear causes a
tension intended by the composer. My advice is to set the volume
control for an appropriate loudness during the loud passages, and then
you work to hear the soft ones, as intended.

bern muller

Gabe Wiener

unread,
Apr 7, 1995, 3:00:00 AM4/7/95
to
GMGraves <gmgr...@aol.com> wrote:

>>The same phenomenon that convinced many people in the 1650s that there
>>_must_ be something to this witchcraft business because so many people had
>>experienced its effects.

>Ah, come on. That type of response is an insult to the inteligence of all
>who find digital less than musically satisfying!

Hardly insulting at all. It is a perfect testimonial to the phenomenal
leaps of logic that people make.

When CDs first came out, some people complained about the unnatural
high frequencies of brass. "Oh," the common argument went, "up there
at the highest end of the spectrum, we only have two samples per
cycle! That can't *possibly* be enough since EVERYBODY knows that in
digital all you do is connect the dots." They were spectacularly
wrong about why early digital high frequencies sounded harsh.

Today we see a much more subtle view of the same phenomenon: people
who claim that since they perceive the effects, the cause must be a
flaw manifesting itself in digital theory.

Both then and now, the only reason these come up is that so many
people have a total lack of desire to understand *why* so many
recordings sound bad: poor microphone technique, bad production,
el-cheapo A/D converters, improper handling of the digital signal
after conversion, the list goes on.

>We're not talking about a
>few isolated crackpots here! were talking about many thousands of critical
>listeners including some well known mainstream muscians and recording
>personnel. How about Kieth Jarrett? Or Bob Ludwig?

Ask Bob how many CDs he masters, and whether he's happy with what he gets.
I think you'll find that he is.

>If you can't hear the difference, then I suggest that perhaps you aren't
>listening critically enough.

Now look who's being insulting? I can hear a difference, and in almost
all cases, I prefer good digital to good analog. Only when I want the
signal-processing effect of analog will I forego digital's accuracy.

You might wish to learn that people can listen critically and still
find digital recording a more desirable format. I do.

--
Gabe Wiener Dir., Quintessential Sound, Inc. |"I am terrified at the thought
Recording-Mastering-Restoration (212)586-4200 | that so much hideous and bad
PGM Early Music Recordings ---> (800)997-1750 | music may be put on records
ga...@panix.com http://www.panix.com/~gabe | forever." --Sir Arthur Sullivan

Richard D Pierce

unread,
Apr 7, 1995, 3:00:00 AM4/7/95
to
GMGraves (gmgr...@aol.com) wrote:
> Actually, Dave Wilson of Wilson Audio (and WATT, PUPPY) fame has done an
> even better experiment. He has recorded an analog source to digital and
> then on playback, has nulled-out the recorded sound by mixing an
> inverse-phase version of the analog original with the digital playback.
> What resulted was a fairly loud noise that seemed to follow the music
> (which could still be heard faintly in the background, you can't 100% null
> any complex analog signal) and sounded like an angry hornet's nest. The
> noise was digital artifacts caused by clock signals, quantization error,
> etc. Most unpleasant.

The experimental procedure itself is hopelessly flawed, and these
procedural errors alone could lead to precisely the kind of results he
got, even with a straight pass-through.

Here's what such a null test requires (be it with analog or digital
equipment):

1. It requires 0 time delay difference between the source and
device under test, or at least a time difference less than
the reciprocal of twice the bandwidth. Thus, at 20 kHz, the source
and DUT signals MUST be synced to better than 25 uSec. If there is
ANY delay between the two, it will show up as precisely the signal
he's describing. This, in fact, is the comb-filtering effect
deliberately used in some recordings, called "flanging".
The smaller the difference in time, the higher the frequency where
the first null occurs.

2. It requires that the phase vs frequency response of the source and
D.U.T. be matched to within similar sorts of errors. Failure to
do so leads to a similar problem as in 1.

3. The broadband levels MUST be matched to each other within a resolution
equal to the least significant bit or the dynamic range, as the
case may be. That means, for example, if you want to null out a system
with, say, 96 dB of dynamic range, your attenuator must be good to 1
part in 65535. That corresponds to the need to adjust gains to match
within 0.00013 dB (that's 1/8000 of a dB)! I challenge Dave Wilson,
or ANYONE else to be able to set a gain that closely.

To make matters worse, if you are performing the test on an analog
machine, with a dynamic range of, say, 66 dB, your level matching
requirement is relaxed by a factor of about 30! Therefore, it's
FAR easier to get a match with an analog system simply because
effects of the mismatch will be buried in the overall higher noise
floor.

4. The frequency response must similarly be matched to within the same
sorts of limits. Where, one might ask, can you find two of ANYTHING
matched to within 0.00013 dB in frequency response?
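
For the curious, the figures in points 1 and 3 are easy to verify. Here's
a quick sketch in Python (assuming the 20 kHz bandwidth and 16-bit/96 dB
system from the examples above):

```python
import math

bandwidth_hz = 20_000

# Point 1: the delay between source and DUT must stay below the
# reciprocal of twice the bandwidth, or the residual is a comb
# filter with its first null at 1 / (2 * delay).
max_delay_s = 1 / (2 * bandwidth_hz)
print(max_delay_s)                 # 2.5e-05 s, i.e. 25 microseconds

# Point 3: nulling a 96 dB (16-bit) system means matching gain to
# within one part in 2**16, expressed here in dB.
steps = 2 ** 16
tolerance_db = 20 * math.log10(steps / (steps - 1))
print(round(tolerance_db, 5))      # ~0.00013 dB
```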

In short, I have absolutely NO doubts that Mr. Wilson heard the effects
he did. But I would not even attempt such a flawed experiment, knowing
full well the impossibility of obtaining a deep null even in the
absence of any real differences. His data clearly suggest that he was
getting results due to procedural errors that far exceed the real
differences that exist in the digital case.

I suggest you or he or anyone else try the SAME experiment on the very
best analog deck you can find, and simply try to find a null, period.
It's impossible, because of the four requirements above. Such a method is
blind to ALL sources of error, real and procedural. It will, for example,
show GROSS effects due to simple minuscule frequency response errors, very
slight delay errors, level matching difficulties, phase anomalies (even
minor ones) and so on.

The only sane conclusion that can be drawn from such data given the
difficulties is that the experiment is very broken.

Gabe Wiener

Apr 7, 1995
In article <3lve4d$8...@agate.berkeley.edu>, GMGraves <gmgr...@aol.com> wrote:
>He has recorded an analog source to digital and
>then on playback, has nulled-out the recorded sound by mixing an
>inverse-phase version of the analog original with the digital playback.
>What resulted was a fairly loud noise that seemed to follow the music
>(which could still be heard faintly in the background, you can't 100% null
>any complex analog signal) and sounded like an angry hornet's nest. The
>noise was digital artifacts caused by clock signals, quantization error,
>etc. Most unpleasant.

While I have a lot of respect for Dave (and own many of his products),
I must say that *if* he ran this experiment (I have no certain knowledge
that he did), it is a rather ridiculous one.

How can you get two identical playbacks from an analog deck? Analog
output will vary with temperature and humidity for one. For another,
the tape will shed minute amounts of oxide each time it's played.
Third, there will be differing amounts of RF interaction each time.

The result here is that even if you could somehow make a perfect
recording of an analog playback, you could never get that same
playback again....not accurately enough to null-cancel them. Try
doing this with two LPs, two reel-to-reel tapes, whatever. In all
instances, you'll get random noise.

IN FACT, the only time you will get complete cancellation between
two different copies of the same program is when cancelling digital
sources!

Gabe Wiener

Apr 8, 1995
Tim Takahashi <t...@me.rochester.edu> wrote:

>Both transcription and listening are done over my home system
>using a modern turntable, a 3-mil spherical stylus and a homebrew
>preamplifier with the appropriate equalization curves (250 Hz and
>750 Hz bass turnover frequencies, various treble rolloff points).

Actually 3-mil is going to be WAY too big for a great many discs out
there. Styli in the 2.3-2.6 mil TE seem to do a good job on most 78s.

>The ONLY difference was the insertion of a recording device
>into the train of events...

This experiment tells us absolutely nothing, because we have no idea:
a) whether you set the gain structure correctly
b) whether the tape deck's input amplifier wasn't saturating from
subsonics
c) whether the tape transport was stable and not fluttering
need I go on?

One can capture 78 sound to analog tape quite well and still keep that
"78 sound" if one wants it.

Aaron J. Grier

Apr 8, 1995
Richard D Pierce <DPi...@world.std.com> wrote:

> This is all, once again, to address the myth that somehow, because it is
> continuous and not discrete, analog MUST, therefore, have "infinite"
> resolution. That is, that an analog system can represent faithfully
> infinitely small changes in signal. Nothing could be farther from the
> truth. In our perfectly realistic analog tape example above, you might be
> able to put in a signal of 0.531230967 volts, but what you will get out
> IS 0.531230967 +- 0.000356 volts, with the actual value of the error
> completely unpredictable according to noise, so the BEST you can say is
> that you are certain that you got something between 0.530874967
> and 0.531586967, you don't know what. All of the "resolution" that
> existed below 0.000356v is simply lost and unrecoverable. You might as
> well call it 0.531 +- 0.000356

Question: is the +- error for any point in time totally independent of the
+- error for any other point in time? Since sound is time dependent, if
the errors correlate, then the difference in voltages over the difference
in time would still be the same regardless of the size of error. Also, the
errors you put forth are also worst case scenarios, which nobody will ever
practically encounter, and this gets me to thinking on which system
(analogue or digital) truly has the most random error. (Kind of
redundant. Hehehe...) You also said that error on analogue increases
depending on amplitude, and perhaps this contributes to the analogue
'warmth'. I have no idea. Maybe I'll go into audio engineering. :-)

The Finn / VLA
Aaron J. Grier
agr...@reed.edu (other addresses will be forwarded to this one.)

Gabe Wiener

Apr 8, 1995
>[Moderator's Note: I wish I could buy 20 Bit MASTERS in *my* local CD
>stores... RD]

Many record companies would gladly provide them if there were some
sort of playback device to listen to them on!

Unless people want to spend $25,000+ on a Nagra-D, there isn't too much
out there where the media is cost effective. Not yet anyway.

Ron Cole

Apr 9, 1995
[quoted text deleted by RD]

: >Dick Pierce wrote:
: > >ALL compression routines lose data? ALL of them, John? NO MATTER WHAT
: > >FORMAT? Care to back that up with hard evidence? For example, you are
: > >asserting that if I take an audio file (the actual sample length and
: > >sample rate is completely up to you) and compress it with PKZIP,

I have done this and the resulting ZIP file was larger than the audio
SND file I started with. This was a 44.1 kHz 16-bit stereo file on an IBM-PC.

Ron


Richard D Pierce

Apr 9, 1995
In article <3m745j$8...@geraldo.cc.utexas.edu>,

Aaron J. Grier <agr...@reed.edu> wrote:
>Richard D Pierce <DPi...@world.std.com> wrote:
>
>> This is all, once again, to address the myth that somehow, because it is
>> continuous and not discrete, analog MUST, therefore, have "infinite"
>> resolution.
[... much removed to preserve moderator sanity ]

>> and 0.531586967, you don't know what. All of the "resolution" that
>> existed below 0.000356v is simply lost and unrecoverable. You might as
>> well call it 0.531 +- 0.000356
>
>Question: is the +- error for any point in time totally independent of the
>+- error for any other point in time? Since sound is time dependent, if
>the errors correlate, then the difference in voltages over the difference
>in time would still be the same regardless of the size of error.

IF the signal is appropriately band limited (as is the case for
real-world analog and digital systems), then the error described is the
(average, peak, rms) error (depending upon how you specified it to begin
with), but it is time invariant IF you can ensure the phase error and
group delay errors are 0, something that is trivial to do in digital
systems and effectively impossible to do in analog systems.

>Also, the
>errors you put forth are also worst case scenarios, which nobody will ever
>practically encounter, and this gets me to thinking on which system
>(analogue or digital) truly has the most random error. (Kind of
>redundant. Hehehe...)

Wrong, the errors I describe are BEST CASE errors.

>You also said that error on analogue increases
>depending on amplitude, and perhaps this contributes to the analogue
>'warmth'. I have no idea. Maybe I'll go into audio engineering. :-)

It certainly could account for a substantial part of the difference.

Bob Myers

Apr 9, 1995
GMGraves (gmgr...@aol.com) wrote:
> Ah, come on. That type of response is an insult to the intelligence of all
> who find digital less than musically satisfying! We're not talking about a

> few isolated crackpots here! We're talking about many thousands of critical
> listeners including some well known mainstream musicians and recording
> personnel. How about Keith Jarrett? Or Bob Ludwig?
> If you can't hear the difference, then I suggest that perhaps you aren't
> listening critically enough.

But George, your last sentence is itself an insult to all those who
ARE listening critically and who either find no significant difference
or who prefer the sound of digital equipment. This group ALSO
includes many thousands of listeners and a similar number of audio
professionals. Do you really want this to wind up as an endless
series of quotations from experts on both sides of the fence?

Tell me how your last statement above is any less offensive than "if
you prefer the sound of analog, then clearly you are deluding yourself
or simply prefer the sound of some pretty major distortions...", hmmmm?
Once more - we CANNOT argue about personal preferences. We CAN discuss
objective data and theory, and what many of us are objecting to
are misrepresentations in these areas, NOT your personal preferences.

Bob Myers | "One man's theology is another man's belly laugh."
my...@fc.hp.com | - Lazarus Long/Robert A. Heinlein
|

Bob Myers

Apr 10, 1995
GMGraves (gmgr...@aol.com) wrote:
> While I agree with your assessment of the nature of analog vs digital, I
> have to disagree with your conclusions. THEORETICALLY digital quantization
> comes much closer to accurately representing the original sound than would
> analog, but in my opinion, in actual real world terms, it does not. I use
> my ears and they say that digital does not serve the music well. We are
> also stuck with a system that was conceived in what was really the infancy
> of digital signal processing. While we can play with filters and amplifier
> stages and jitter and the like, we are still stuck with a standard,
> conceived in the 1970's, which responds far less to refinement than does a
> strictly analog record and playback system.

What the above boils down to is "I like the sound of analog systems better."
Fine; that's your right, and I wouldn't dream of trying to talk you out
of what is clearly your personal preference. However, it should be equally
clear that this preference is NOT by any means to be considered evidence
that analog systems are inherently superior to digital in terms of objective
accuracy. I would also be very interested in hearing what limitations
inherent in the digital system "conceived in the 1970's" that you believe
are responsible for it responding "far less to refinement", and the reasons
you believe this. If you've got those, then it should also be possible
to define what improvements you feel this system needs.

Bob Myers | "The difference between science and the fuzzy subjects
my...@fc.hp.com | is that science requires reasoning, while those other
| subjects merely require scholarship." - R. Heinlein

Bob Myers

Apr 10, 1995
GMGraves (gmgr...@aol.com) wrote:
> Actually, Dave Wilson of Wilson Audio (and WATT, PUPPY) fame has done an
> even better experiment. He has recorded an analog source to digital and

> then on playback, has nulled-out the recorded sound by mixing an
> inverse-phase version of the analog original with the digital playback.
> What resulted was a fairly loud noise that seemed to follow the music
> (which could still be heard faintly in the background, you can't 100% null
> any complex analog signal) and sounded like an angry hornet's nest. The
> noise was digital artifacts caused by clock signals, quantization error,
> etc. Most unpleasant.

Good Lord, there are so many holes in this I don't know where to
begin.

How was the "inverse-phase" version of the analog signal derived and
mixed in? How was it verified that its amplitude exactly matched that
of the signal coming out of the DAC, without running it through something
that very likely affected the signal itself? How was the conclusion that
this noise was indeed due to "digital artifacts" verified, or was this
just this experimenter's unsupported hypothesis? How well was the original
signal filtered prior to digitizing, and how were the two sources
maintained in phase for this experiment? (This sounds like either
distortion coming out of the mixing process or aliasing, for one thing...)

What's wrong with the other classic experiment? In an otherwise
all-analog system, switch an A/D->D/A chain in and out of the playback
path. See if a group of listeners can tell when the digital circuits
are "in" via a double-blind test. If they can't, then the process of
digitizing the signal is NOT having any significant audible impact.
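
The statistics of such a double-blind test are easy to check, too. Here's
a quick sketch in Python (the 12-of-16 score below is just a made-up
example, not data from any actual test): the chance of a listener getting
k or more trials right by pure guessing.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of >= `correct` right answers in `trials` coin-flip guesses."""
    hits = sum(comb(trials, k) for k in range(correct, trials + 1))
    return hits / 2 ** trials

# 12 right out of 16 would happen by chance under 4% of the time --
# reasonable evidence the switched-in digital chain was audible.
print(round(abx_p_value(12, 16), 3))
```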

Bob Myers KC0EW Hewlett-Packard Co. |Opinions expressed here are not
Workstations Systems Div.|those of my employer or any other
my...@fc.hp.com Fort Collins, Colorado |sentient life-form on this planet.

gdg...@vnet.ibm.com

Apr 10, 1995
I've been following this thread for a while from a distance... (long
sigh) Time to step in again, on that ever-flammable (and ever-long-
running!) topic, of analog and digital...

ga...@panix.com (Gabe Wiener) on Fri, 07 Apr 95 22:12:25 EDT wrote:

>GMGraves <gmgr...@aol.com> wrote:

>>>The same phenomenon that convinced many people in the 1650s that there
>>>_must_ be something to this witchcraft business because so many people had
>>>experienced its effects.

>>Ah, come on. That type of response is an insult to the intelligence of all


>>who find digital less than musically satisfying!

>Hardly insulting at all. It is a perfect testimonial to the phenomenal


>leaps of logic that people make.

Sorry, Gabe, I respect you but IMO, you're only half-right here. While
correct on the specifics here, I think you're missing the tone (no pun
intended) of what Mr Graves is trying to communicate. Yes, it is unfair
for him to generalize his experiences of digital to others. But OTOH,
it's also unfair to attack him in turn for having such experiences and
saying so.

Mr W continues:

>When CDs first came out, some people complained about the unnatural
>high frequencies of brass. "Oh," the common argument went, "up there
>at the highest end of the spectrum, we only have two samples per
>cycle! That can't *possibly* be enough since EVERYBODY knows that in
>digital all you do is connect the dots." They were spectacularly
>wrong about why early digital high frequencies sounded harsh.

Yes, the people who gave that reason _were_ wrong about the specifics.
However, the point I would like to make is that they were _NOT_ always
wrong about their perceptions. Perhaps there are more diplomatic, accurate,
less argumentative ways to inform audio engineers of that harshness.
The challenge, of course, would be to let the recording engineer who
captured that awful sound know, without getting him defensive or just
plain angry :-)

>Today we see a much more subtle view of the same phenomenon: people
>who claim that since they perceive the effects, the cause must be a
>flaw manifesting itself in digital theory.

Well... you back someone who wants to say "this doesn't sound right"
into a corner - not only do they have to identify the audible flaws,
but also provide reasons (and cure too, if possible) for them. Now, I
recognize that you audio veterans are tired of hearing the same old
arguments and reasoning over and over again (some days, I wonder how
the Wise Old Bear, Dick Pierce, can stand facing his keyboard during
these threads :-)) But think about your "harsh brass" example - how
would you have determined the nature and impact of that problem?
Suppose the measurer of the brass sound decided that the brass was
"accurate", since, in certain acoustically-poor halls, that's what
the brass sounds like? What could one say to the hapless listener -
"It sounds that way live, too - quit yer bellyachin' and leave if
you don't like it?"

>Both then and now, the only reason these come up is that so many
>people have a total lack of desire to understand *why* so many
>recordings sound bad: poor microphone technique, bad production,
>el-cheapo A/D converters, improper handling of the digital signal
>after conversion, the list goes on.

(Sigh) Yes, but... how can a listener evaluate the technology, if not
by its results? After all, many of the same problems (except for the
A/D issues) can occur in analogue - especially the miking and
production pieces. I know this isn't the engineers' fault, but
remember, the marketing folks, for better or worse, early on presented
digital as (that dread phrase) "perfect", with little explanation to
the wider, non-recording, 'end-user' public as to the advantages that
the recording engineer, mixer, masterers et al can gain from the
technology when properly executed.

Also, without explanation, the powers-that-be (labels and distributors)
quickly deleted LPs from catalogues, stopped producing and selling them,
so that the A/B comparisons, the evaluation of alternatives some would
like to have had, could not take place. Suddenly, all we had were CDs,
and "of course" they were better. (Makes one wonder about those few who
buy the limited edition new and reissue LPs, doesn't it?) Yes, I know
again it's not 'your' fault, but some background context for such a
long-running and deeply held feud may be helpful.

Besides, if we _all_ knew what it takes to do great recordings, you'd
have a lot more competition :-)

In a separate post on the same thread, my...@hpfcla.fc.hp.com (Bob Myers)
writes:

>GMGraves (gmgr...@aol.com) wrote:

>> Ah, come on. That type of response is an insult to the intelligence of all
>> who find digital less than musically satisfying! We're not talking about a

>> few isolated crackpots here! ......

>But George, you last sentence is itself an insult to all those who
>ARE listening critically and who either find no significant difference
>or who prefer the sound of digital equipment. This group ALSO
>includes many thousands of listeners and a similar number of audio
>professionals.

Gentlemen, gentlemen, CALM, please! Flame extinguishers at the ready!
Is it ever possible to discuss analog and digital without either side
ready to draw pistols and duel?

Mr Myers continues:

> ...... Do you really want this to wind up as an endless


>series of quotations from experts on both sides of the fence?

Hate to say so, but the nature of this newsgroup often seems to _force_
such proofs and counter-proofs, once we go past opinion, to whatever
levels of justification and possible explanation that each poster
deems necessary.

>Tell me how your last statement above is any less offensive than "if
>you prefer the sound of analog, then clearly you are deluding yourself
>or simply prefer the sound of some pretty major distortions...", hmmmm?

Actually, in the past on this newsgroup, people have said THAT, too.
So I don't believe that any one side's hands can always 'stay clean'.
I've always wanted to apologize to jj and also Dick Pierce after one
or two such rounds I've been involved with here on r.a.h-e. in the
past, due to what I've learned in the sometimes-heated discussion process.
I also have the impression that the quality of the equipment and source
that engineers have access to and use for evaluation can be better
than what we mere consumers often get - such as 48K sampled DAT
masters, before being reduced to the 44.1K CD standard, for example.

>Once more - we CANNOT argue about personal preferences. We CAN discuss
>objective data and theory, and what many of us are objecting to
>are misrepresentations in these areas, NOT your personal preferences.

Hmm, seems to me that the real sub-text of these ongoing debates is
what IS 'objective data and theory', and how we discuss it in an
educational way, rather than merely fighting about it. Yes, I think
Mr Graves' method of argument has been harsh as well, but the responses
to what he says remind me of the old retort "'For example' is not proof".

I'm not _really_ in an argumentative mood - at last I've gotten my CDs
to sound good, if sometimes different than my LPs!

Regards, Geoff Gray, c/o IBM Corporation * Standard disclaimers apply.

Lon Stowell

Apr 11, 1995
In article <3m40qc$i...@agate.berkeley.edu> ckr...@mcs.com
(Chuck Ross) writes:

>I've had many similar experiences with Telarc discs with very wide
>dynamic range, and can say that one has to be extremely careful with
>some of these. One would like to get 'natural' sounding levels on
>most of the music, but when some of the full orchestral stuff lets
>loose, it can easily damage speakers and crossovers, etc. One example
>is the frightening bass drum rolls in the final track of "Eine
>Straussfest", which can easily blow up speakers.

That Telarc doesn't seem to have as much capability for damage as
Chiller, Time Warp, and some of their other special effect recordings.

If you have weak speakers, possibly you might damage them on a Telarc,
but for sheer damage I've done more with test records and Digital
Domain. Even then, the only time I've ever damaged anything was at
well over comfortable listening levels--EXCEPT for melting down an
Apogee with a Krell and Jesu Joy of Man's Desiring on Bachbusters.

I personally consider Telarc's little yellow warning about "damage to
all but the finest equipment" to be little more than hype. Except for
some ribbon and electrostatics, it really is difficult to harm
speakers with music signals--even Telarc's.

>Another disc I've found to be actually unplayable (at least one cut),
>is Reference Recording's "Pomp & Pipes", the last track of which,
>near the end, has an incredible swell of level at very low frequency,
>full orchestral plus big organ pipes. I've had no trouble listening
>to this on headphones, but with speakers, it always makes me run for
>the gain control...the speakers start "wobbling" even at moderate
>levels. It makes me wonder if the CD could have been overmodulated
>somehow.

Details. WE WANT DETAILS. >:-)

Try the Saint-Saens "Organ" sometime, or some of the Cesar Franck and
Messiaen (sp?) chorales.

>So, wide dynamic range is not always a good thing...I have yet to
>hear a system that can reproduce the dynamics of a wide-range live
>performance.

Odd, I would trash any system that couldn't meet or exceed the dynamic
range of a live performance.

>Maybe judicious level-riding might be a good thing for recording
>companies?

Or judicious volume riding would be a better idea for audiophiles
whose listening environments, equipment, or ears don't like the full
dynamic range that a good digital source can reach?

There are dynamic range compressors available for such situations,
even though some consider them heresy, I would suggest that a GOOD
compressor would be better than compressing the recording--that way
if you want the full realism you can have it.
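
To make the suggestion concrete, here's a minimal sketch of what such a
compressor does, in Python (the threshold and ratio are arbitrary
illustrative values; a real unit would also smooth the gain with
attack/release time constants rather than act per-sample):

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Reduce peaks above threshold_db by the given ratio (static, no smoothing)."""
    out = []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        if level_db > threshold_db:
            # Above threshold, only 1/ratio of the overshoot gets through.
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
            x *= 10 ** (gain_db / 20)
        out.append(x)
    return out

# A full-scale peak (0 dB) is pulled down by 15 dB with a 4:1 ratio at
# -20 dB; the quiet sample passes through untouched.
print(compress([1.0, 0.01]))
```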

Werner Ogiers

Apr 12, 1995
GMGraves (gmgr...@aol.com) wrote:

: even better experiment. He has recorded an analog source to digital


: and then on playback, has nulled-out the recorded sound by mixing an

This test seems to be flawed:

-how did he synchronise both recordings to within a fraction of a 20kHz
period?

-the noise in the analog recording is stochastic. I.e. the actual noise
signal of the analog source differs from the noise it produced during
the transcription to digital
=> the noise won't be nulled.

: What resulted was a fairly loud noise that seemed to follow the
: music

What is 'loud', compared to the original signal? -40dB? -60dB? -80dB?
-96dB?

--
Werner Ogiers IMEC, division MAP
phone: +32 (0)16 281 556 Kapeldreef 75
fax: +32 (0)16 281 501 B-3001 Leuven
e-mail: ogi...@imec.be Belgium

Bernhard Muller

Apr 12, 1995
In a previous posting, Lon Stowell (lsto...@pyramid.com) writes:

> In article <3m40qc$i...@agate.berkeley.edu> ckr...@mcs.com
> (Chuck Ross) writes:

>>Another disc I've found to be actually unplayable (at least one cut),
>>is Reference Recording's "Pomp & Pipes", the last track of which,
>>near the end, has an incredible swell of level at very low frequency,
>>full orchestral plus big organ pipes. I've had no trouble listening
>>to this on headphones, but with speakers, it always makes me run for
>>the gain control...the speakers start "wobbling" even at moderate
>>levels. It makes me wonder if the CD could have been overmodulated
>>somehow.

> Details. WE WANT DETAILS. >:-)

The piece is an organ rendition of Weinberger's Fugue from Schwanda the
Bagpiper. It does have plenty of (clean) bass. But the performance is so
pompous and turgid, it is not fun to listen to except to feel your room
shake. But I do play it for friends all the time... :-)

--
We can choose to Throw Stones
to Stumble over them
Bern Muller to Climb over them
or to Build with them.--William Arthur Ward

Lon Stowell

Apr 13, 1995
In article <3m96f4$t...@agate.berkeley.edu> rdc...@crl.com (Ron Cole) writes:
>I have done this and the resulting ZIP file was larger than the audio
>SND file I started with. This was a 44.1 kHz 16-bit stereo file on an IBM-PC.

The topic was whether or not compression routines result in loss
of data. If you have a archival type compression/decompression
tool which results in even the loss or changing of a single bit
in a googolbyte in a compress/restore cycle, it is BUSTED--PERIOD!!!!

This is the story for computer-data routines, not the lossy type
routines used for audio and video coding in some schemes.

Format-changes, such as converting GIF to EPS, PCX, etc. are not
simple compression. Some loss of resolution and data is inevitable
in any such conversion.

Whether or not any given compression routine results in more bits
in the compressed file than in the original file is an entirely
different issue. For answers, you really need to continue the
discussion on the technically oriented computer groups. This is
not at all unknown, for example if you send a pre-compressed
file over a modem connection with MNP4/5, don't be surprised
if the file takes longer than if you turn MNP 4/5 off. The
efficiencies of compression algorithms are governed by mathematical
laws--as are the results when they are applied to any given type
of data patterns.
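
Both points are easy to demonstrate with any general-purpose lossless
coder -- a sketch using Python's zlib here, standing in for PKZIP: the
round trip is bit-exact, and noise-like "audio" data comes out no smaller
than it went in.

```python
import os, zlib

# One second of noise-like 44.1 kHz 16-bit stereo "audio" -- essentially
# incompressible, like a typical audio sample file.
raw = os.urandom(44100 * 2 * 2)

packed = zlib.compress(raw, 9)
restored = zlib.decompress(packed)

assert restored == raw              # lossless: not a single bit changed
print(len(packed) >= len(raw))      # random data doesn't shrink; it grows a bit
```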

