
WHAT IS 96X OVERSAMPLING?


James Keivom

Nov 30, 1992, 2:27:37 PM


I recently saw a CD player advertising 96x oversampling. What
exactly is the benefit of higher oversampling rates?

Also, is there a big sonic difference between 4x and 8x
oversampling?

thanks

jim


Randall K. Smith

Dec 1, 1992, 11:14:54 PM

Ah! Finally a question in rec.audio.high-end I can answer!

The need for oversampling arises from a combination of effects. The first
is the Nyquist sampling theorem, which states that you must sample a signal
at twice its highest frequency. Thus, if there's a 22 kHz signal present,
one must sample at at least 44 kHz. So far, so good. (I'm not sure whether,
in CD terminology, this counts as 2x oversampling. However, this really
isn't important.) The next problem, as I've been given to understand, is
that with a 44 kHz sampling speed you're going to end up with some residue
of that frequency in your output. Thus, an annoying high-pitched whine in
your sound, which annoys dogs and cuts down on CD player sales.

The answer to this is to put a filter on the output that is flat from
0-22 kHz, but drops to zero somewhere between 22 and 44 kHz, and stays
there. Bingo! No whine, dogs are happy, and CD players can be sold to the
masses. The next problem is, of course, the filter. It is difficult to
build a filter that is absolutely flat and then cuts off rapidly. Flat is
easy, a sharp cutoff is easy; try to do both and you induce "ringing", or
rapid oscillations, right before the cutoff frequency. This causes big
distortions in the signal, and makes high-end buyers think about sticking
with their trusty analog stuff.

So, the solution is to 'oversample', which does nothing for the signal
quality--what's there is there, and what isn't, isn't. It does, however,
move the induced whine to a higher frequency, which means the filter can be
made to cut off much more slowly, since the signal contains no frequencies
between the fixed highest frequency of a CD and the new sampling frequency.
This allows the filter to be flat where it needs to be, and still cut off
the unwanted stuff.

So, the difference between 4x and 8x oversampling is that the filter circuit
is easier to make for 8x. The exact amount of oversampling needed is thus
a tradeoff, and something for audio engineers to worry about. It seems that
they've decided that something around 4-8x is all you need. Thus, 96x
is more of an advertising gimmick than anything else.
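To make the tradeoff concrete, here's a little arithmetic sketch (not from
the thread itself; the 20 kHz passband figure is my assumption) showing how
much room the analogue filter gets as the oversampling ratio goes up:

```python
def transition_band(oversampling, fs=44100.0, passband=20000.0):
    """Return (low_edge, high_edge) in Hz for the analogue filter.

    The filter must stay flat up to `passband` and attenuate the first
    spectral image, whose lowest component sits at
    oversampling * fs - passband.
    """
    image_edge = oversampling * fs - passband
    return passband, image_edge

# With no oversampling the filter must fall from flat to zero in
# roughly 4 kHz (20 kHz to 24.1 kHz); at 4x it gets over 130 kHz of
# transition band, so a gentle, well-behaved filter is enough.
print(transition_band(1))  # (20000.0, 24100.0)
print(transition_band(4))  # (20000.0, 156400.0)
```

That 30-fold widening of the transition band is the whole point: the sharper
analog filter is what rings, and oversampling lets you avoid building it.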

Randall Smith
Dept. of Physics
rsm...@wisp4.physics.wisc.edu

James Buster

Dec 3, 1992, 1:03:28 AM
In article <1fiegk...@uwm.edu> rsm...@wisp4.physics.wisc.edu (Randall K. Smith) writes:
>So, the difference betweenn 4x and 8x oversampling is that the filter circuit
>is easier to make for 8x. The exact amount of oversampling needed is thus
>a tradeoff, and something for audio engineers to worry about. It seems that
>they've decided that something around 4-8 is about all you need. Thus, 96x
>is more of an advertising gimmick than anything else.

Possibly. I saw an article in Audio about a 256x(?) oversampling circuit that
had no brick-wall filter. The high sampling rate made such a filter
unnecessary. The reviewer felt that this circuit gave a far superior level
of fidelity to any of the 8x oversampling circuits he had heard, especially
in high-frequency reproduction and dynamic range. He attributed this to the
lack of filtering in the circuit.
--
James Buster
bit...@netcom.com

Deryk Barker

Dec 3, 1992, 8:02:13 PM
bit...@netcom.com (James Buster) writes:

I for one am still in the dark. I understand the Nyquist theorem, but
CDs are all(?) sampled at 44.1 kHz, and the oversampling is a feature
of the playback; early Sony players, for example, used 2x and early
Philips 4x. But both played the same CDs. I always understood (to use
the word but vaguely) that 'oversampling' involved the playback
circuitry reading each sample several times (i.e. 2, 4, 8 etc.) and
then doing something with them: but what? Averaging them doesn't
really make sense, as they should all be the same.

Further enlightenment would be appreciated.

--
Real: Deryk Barker, Computer Science Dept., Camosun College, Victoria B.C.
Email: (dba...@spang.camosun.bc.ca)
Phone: +1 604 370 4452

Graham Allan

Dec 4, 1992, 4:49:56 PM
In article <1fno5f...@uwm.edu> dba...@spang.Camosun.BC.CA (Deryk Barker) writes:
>I for one am still in the dark. I understand the Nyquist theorem, but
>CDs are all(?) sampled at 44.1KHz, and the oversampling is a feature
>of the playback, early Sony players, for example, used 2X and early
>Philips 4x. But both played the same CDs. I always understood (to use
>the word but vaguely) that 'oversampling' involved the playback
>circuitry reading each sample several times (i.e. 2, 4, 8 etc.) and
>then doing something with them: but what? Averaging them doesn't
>really make sense, as they should all be the same.
>
>Further enlightenment would be appreciated.

Early Philips CD players used 4x oversampling because they couldn't
(reliably, cheaply) produce 44.1 kHz 16-bit D-A converters, but they could
produce 14-bit converters which ran at 4x the speed. An individual sample
would be reused 4 times, but the 2 bits which wouldn't 'fit through' the
D-A converter were fed back into the sample as a correction factor. It's
easier to give an example to show how it works:

    example sample = 10 + A
              LSBs---^    ^--- 14 most significant bits

    first oversample    10    A    -> D-A converter
    second              01    A    -> D-A converter
    third               11    A    -> D-A converter
    fourth              00   A+1   -> D-A converter

(the two least significant bits are added into each successive oversample)

I can't remember exactly what is then done with the analogue output; one
possibility would be to feed it through an R-C smoothing circuit, I suppose.
Anyway, the overall result is that you get apparent 16-bit resolution from a
14-bit converter.
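Since I'm hazy on the exact circuit, here's a rough sketch of the
error-feedback idea in Python (my reconstruction, not necessarily what
Philips actually built): carry the 2 truncated LSBs forward into the next
repeat, and the 4-sample average recovers the full 16-bit value.

```python
def oversample_14bit(sample16, repeats=4):
    """Send a 16-bit sample through a 14-bit DAC `repeats` times,
    feeding the 2 dropped LSBs back in as a correction each time.

    Returns the codes sent to the converter, kept in 16-bit units
    (i.e. the 14-bit code shifted left by 2) so they are easy to
    compare against the input.
    """
    err = 0
    codes = []
    for _ in range(repeats):
        acc = sample16 + err
        code14 = acc >> 2        # top 14 bits go to the converter
        err = acc & 0b11         # dropped 2 bits correct the next repeat
        codes.append(code14 << 2)
    return codes

# The analogue smoothing averages the four conversions:
# sum(oversample_14bit(1001)) / 4 recovers 1001 even though each
# individual code is only 14-bit (a multiple of 4).
```

The averaging that recovers the lost bits is exactly the job of the R-C
smoothing mentioned above.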

Presumably the idea can be extended for higher orders of oversampling,
though you may not have actual data bits to feed back into the sample but
instead some intelligent interpolation between the actual 16 bit samples
created by the player somehow.

Graham al...@mnhep8.hep.umn.edu


A. J. Dean

Dec 4, 1992, 6:36:59 AM
James Buster (bit...@netcom.com) wrote:

I've got an old Philips CD303, and have installed a little switch so I can
turn the oversampling (4x) on and off at will, and therefore all digital
filtering on and off too. I could swear that in some cases it sounded better
without the oversampling (which also reduces the DACs to 14-bit/no
dither...). Perhaps because the 22.050 kHz+ alias components were getting
through, a la the Legato Link thing. However, any results obtained from my
home-built system have to be taken with a bucket of salt!

While on the subject, does anyone know:
1) How worthy of respect (sonically) the old CD303 should be?
2) A good source for the Burr-Brown pin-compatible replacement for the
NE5533? opamp used in the CD303, or isn't it even worth trying?
(mail order only, since I am in New Zealand)

Oh, BTW (referring to included text) oversampling usually implies some sort
of digital filtering, as you usually want to create new samples in between
the actual samples that 'smooth out' the waveform. Though by simply
inserting zeros for the in-between samples, you can reduce the aperture
effect (which causes HF response to drop 6 dB at 22.050 kHz) of a simple
convert-and-hold no-oversampling DAC (i.e. a 'staircase' output). The
interpolation filter also usually tries to correct for any remaining
aperture effect and the characteristics of the analogue output filter. Then
you end up with an 'ideal' brick-wall filter at 22.050 kHz which has perfect
phase response, a response so perfect that no one even considers anything
else when designing it. (Listen? What is that? I thought that was the
customer's job...)

Antony. (dea...@elec.canterbury.ac.nz)

Fong Kin Fui

Dec 7, 1992, 12:12:51 AM

Wait a minute. I thought oversampling had to do with
minimizing quantization noise. If memory doesn't fail me,
oversampling is called interpolation in digital signal
processing. It is done by adding samples of zero amplitude
in between the actual samples. The effect is that it is as
if you were sampling with more bits, hence less quantization
noise.

Oversampling is always 2^n. I'm not sure how the number 96 is
calculated.

Maybe someone in DSP can explain this in more detail.

Fui

Thomas W. Matthews

Dec 4, 1992, 12:30:29 PM

I just read a posting by rsm...@wisp4.physics.wisc.edu:

>there's a 22 KHz signal there, one must sample at at least 44 KHz. So far,
>so good. (I'm not sure if in the terminology of CDs, if this counts as 2x
>oversampling. However, this really isn't important.) The next problem, as

Sampling at the minimum rate needed for reconstruction (the Nyquist rate)
is not called 2X oversampling even though the sampling rate is twice the
highest frequency to be reconstructed. The number of "times" oversampling
is referenced to the Nyquist rate.

>I've been given to understand, is that if you have a 44 KHz sampling speed,
>then you're going to end up with some residue of that frequency in your
>output. Thus, an annoying high-pitched whine in your sound, which annoys
>

>The answer to this is to put a filter on the output that is flat from 0-22KHz,
>but drops to zero somewhere between 22 and 44 KHz, and stays there. Bingo!

Actually, the problem is a bit tougher. The sampled signal not only contains
the original spectrum from 20 Hz to 22 kHz, but also IMAGES of that
spectrum, part of which cover the range between 22 kHz and 44 kHz. So the
reconstruction filter must cut off sharply around 22 kHz, because we don't
want any of that image coming through.

CD players don't "oversample" in the technical sense; they are only given
samples at the 44.1 kHz rate recorded on the disc. However, an oversampling
player generates extra samples between the original 44.1 kHz samples.
A 2x oversampling player generates one extra sample between each adjacent
pair of original samples, doubling the number of samples to resemble an
88.2 kHz sampling rate. When this is done, the images of the original
spectrum I mentioned above spread apart, so that the reconstruction filter
need not cut off so sharply. It should be pointed out that the process of
interpolating those extra samples is itself a filtering operation;
oversampling players still need sharp-cutoff filters, but some of the
filtering is done in the digital domain, which relaxes the constraints on
the analog filter.
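To show what "interpolation is a filtering operation" means, here's a toy
sketch (mine, not from any real player): 2x oversampling as zero-stuffing
followed by a deliberately crude 3-tap linear-phase low-pass. The 3-tap
kernel amounts to straight-line interpolation; real players use far longer
FIR filters, but the structure is the same.

```python
def oversample_2x(samples, taps=(0.5, 1.0, 0.5)):
    """2x oversampling: insert a zero between adjacent samples, then
    low-pass filter the result.

    With this half-band-style kernel the original samples pass through
    unchanged and each inserted zero becomes the average of its
    neighbours; a longer kernel would give band-limited interpolation
    instead of linear.
    """
    # Step 1: zero-stuff to double the sample rate.
    stuffed = []
    for s in samples:
        stuffed += [s, 0.0]
    # Step 2: convolve with the (centred) low-pass kernel.
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, h in enumerate(taps):
            i = n + k - 1            # centre the 3-tap kernel on n
            if 0 <= i < len(stuffed):
                acc += h * stuffed[i]
        out.append(acc)
    return out

print(oversample_2x([0.0, 2.0, 4.0])[:5])  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

Note the even-indexed outputs are exactly the disc's samples; only the
in-between ones are synthesized, which is why the information on the disc is
untouched.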

A to D converters can indeed oversample. They can sample the original
signal at rates that are faster than the minimum rate that Nyquist predicts.
Advantages of this are that filtering can once again be done in the digital
domain and that some noise shaping can be done.

I have oversimplified a lot here. I'm interested in hearing about the
applications of oversampling converters for audio and how they help relax
constraints on the analog anti-aliasing filters. Also, I'd like to know if
anyone else agrees with me that CD players really don't oversample.

Tom Matthews

Fong Kin Fui

Dec 7, 1992, 8:03:11 PM
engp...@nuscc.nus.sg (Fong Kin Fui) writes:
: Oversampling is always 2^n. I'm not sure how the number 96 is
:
Someone corrected me. It seems that you can oversample at 96x:
add 96 zeros in between samples. Sorry for the bit of
misinformation.

Fui

Struan Gray

Dec 8, 1992, 11:14:16 AM
Deryk Barker writes (in reaction to stuff I've deleted):

>
> I for one am still in the dark. I understand the Nyquist theorem, but
> CDs are all(?) sampled at 44.1KHz, and the oversampling is a feature
> of the playback, early Sony players, for example, used 2X and early
> Philips 4x. But both played the same CDs. I always understood (to use
> the word but vaguely) that 'oversampling' involved the playback
> circuitry reading each sample several times (i.e. 2, 4, 8 etc.) and
> then doing something with them: but what? Averaging them doesn't
> really make sense, as they should all be the same.
>
> Further enlightenment would be appreciated

Don't confuse oversampling on ADCs (done to improve bit resolution) and
oversampling on DACs (done to help eliminate noise from the DAC itself).
The former is like taking an average of several readings and does affect
the accuracy of the information stored on the CD. The latter means that
said information can be converted back into analogue signals with fewer
artifacts, and has no effect on the information that the system is
*attempting* to present as music.

When a DAC chip does a conversion it dumps noise into the power rails
and the analogue ground. This will vary with the value to be converted,
but for argument's sake imagine that at the end of the conversion the
DAC chip causes a small voltage spike which, if left unattended, will
be amplified and appear out of your speakers.

The spike itself is probably too fast to be resolved by the speakers
or your ears, but don't forget that the chip is performing conversions
at nice, regular time intervals, and so the train of spikes forms a
waveform with a significant component at the sampling (conversion)
frequency. If the chip had no oversampling this would be at 44.1 kHz,
a little high for the ear, but enough to bother tweeters and to create
noise at lower frequencies through intermodulation.

So it needs to be filtered. Because the problem is downstream of the
DAC this cannot be done with DSP, so an analogue filter must be used
which preserves the phase and amplitude of signals up to 20-odd kHz but
which kills this tone at 44.1 kHz. As many posts to this group have already
said, this is hard to do - passband ripple and phase effects start to
mess about with the music. So the solution is to raise the conversion rate
to a frequency well above those that you want to preserve; it is then
simple to design an analogue filter which kills the DAC's whistle without
badly affecting the DC-20 kHz audio band.

I don't know the details of how this is implemented. Because it is
easier to build a frequency doubling circuit than a frequency tripling
circuit the conversion frequency is usually multiplied by a power of
two. Whether the DAC converts the same digital number several times
or does clever lsb manipulations to reduce correlated noise I leave to
others to say. Similarly, if I've made a howler above, please let me
know (kindly :-).

Struan

Mark Tillotson

Dec 8, 1992, 12:34:39 PM
(longish explanation of Nyquist sampling theorem and its impact by
rsm...@wisp4.physics.wisc.edu (Randall K. Smith) omitted)

Randall has the gist of the explanation, but it's even worse than
a question of eliminating the 44.1kHz tone. If you take a 44.1kHz
sampling rate and put the samples together naively (sample & hold
circuit say), then a 10kHz tone (say) will be reconstructed as:

10kHz tone (wanted)
44.1 kHz tone (breakthrough from the sampling clock)
44.1 +/- 10kHz aliases (unwanted)
88.2 +/- 10kHz aliases (unwanted)
.....
(oh yes, some quantisation noise (broadband) as well... and maybe
clock jitter)

Admittedly the alias frequencies are present at lower amplitude than
the wanted one, but they can still be a source of problems by mixing with
the higher wanted audio frequencies (in the ear itself, for instance!)
and thus generating audible ghost frequencies. Also, some people's
hearing goes way above 22.05 kHz and they can hear the aliases
directly! Basically you get the same effect as in a ring modulator.
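The arithmetic behind that list of aliases is simple enough to write down
(a sketch of the standard image formula, using the 10 kHz tone from the
example above):

```python
def images(f, fs, n_images=2):
    """Image (alias) frequencies of a tone at f Hz when a sampled
    signal at rate fs Hz is reconstructed naively: each multiple of
    fs carries a pair of images at k*fs - f and k*fs + f.
    """
    out = []
    for k in range(1, n_images + 1):
        out += [k * fs - f, k * fs + f]
    return out

print(images(10000, 44100))      # [34100, 54100, 78200, 98200]
print(images(10000, 8 * 44100))  # first image now up at 342800 Hz
```

At 8x the conversion rate the nearest image sits above 340 kHz, which is
what lets the analogue filter after the DAC be simple and stable.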

To filter out frequencies above 24kHz without significantly affecting
ones below 20kHz is very hard in analogue circuitry, requiring very
precise alignment of filter component values and regular
recalibration. This is really not feasible for home entertainment
equipment!!

Basically, oversampling is the process of constructing a version of the
waveform at a higher sampling frequency (some multiple of the
original), digitally synthesizing the extra samples with a digital
low-pass filter. If the precision of the digital filter
implementation is good enough, you preserve (nearly) all of the
original 0-22 kHz information, but instead of alias frequencies centred
on 44.1 kHz, 88.2 kHz, etc., 8x oversampling moves these to
352.8 kHz, 705.6 kHz, etc., which can then be reduced with a relatively
simple and stable (and of high sonic quality, I hope) analogue filter
after the D-to-A converters. There are neat tricks in the digital
domain to implement brick-wall filters with linear phase
characteristics, which helps.

One problem is that you need somewhat faster D-to-A converters to keep
up with higher sampling rates, and there may be a slight increase in
quantization noise if the digital filter is not information-preserving.

My feeling is that 4x oversampling is probably sufficient, in the
sense that more effort should go into the converter and analogue side
of things rather than chasing diminishing returns in the digital
domain.


------------------------------------------------------
|\ /| | , M. Tillotson Harlequin Ltd. \
| \/ | /\| |/\ |< ma...@uk.co.harlqn Barrington Hall,\
| | \_| | | \ +44 223 872522 Barrington, \
I came, I saw, I core-dumped... Cambridge CB2 5RG \


Fong Kin Fui

Dec 8, 1992, 8:48:28 PM
engp...@nuscc.nus.sg (Fong Kin Fui) writes:
: engp...@nuscc.nus.sg (Fong Kin Fui) writes:
: : Oversampling is always 2^n. I'm not sure how the number 96 is
: :
: Add 96 zeros in between samples. Sorry for the bit of
      ^^
      95
Oops!


A. J. Dean

Dec 9, 1992, 12:19:29 AM
Thomas W. Matthews (matt...@eecs.ucdavis.edu) wrote:
: analog anti-aliasing filters. Also, I'd like to know if anyone else agrees

: with me that CD players really don't oversample.

I've wondered about that too, but apparently it goes like this...

A 'real' sampled signal is supposed to be just that - the original signal
multiplied by a regular 'string' of unit impulses, i.e. a sort of signal
spike at every sample and zero in between. This is what a digital signal is
supposed to represent anyway, and it's also the way the DAC is supposed to
reconstruct it to get a flat response. So oversampling is the "oversampling"
of _this_ theoretical signal at another frequency, with zeros in between and
all. This oversampled (digital) signal can then be put through a DAC to get
a flatter frequency response (but still with imaging).

Of course this is a bit of a waste (with 4x oversampling you wouldn't
utilise the DAC for 3 out of every 4 samples...), so interpolation is done,
which sets these (previously zero) samples so as to produce a good-looking
(good-sounding?) reconstructed waveform. The result doesn't have as much
(spectral!) imaging: high frequencies above the audio band are reduced, and
therefore audio frequencies don't beat with them as much. Alternatively you
could say the oversampled signal passes through a low-pass filtering
operation, but that is just terminology.

I noticed another post explaining how Philips gets 16 bits out of their
14-bit converters (noise shaping), which is exactly what my (old!) player
does - and it does: if I disable the 4x oversampling you can hear that
'digital' noise on low-level signals. It relies on the fact that the
analogue low-pass filter will average out elements of the signal which
happen quickly. Bitstream converters work on essentially the same principle,
except they are an extreme case - creating over 16 bits of resolution out of
a 1-bit stream of data at 255 times the 44.1 kHz rate (Philips). Actually,
their oversampling/noise-shaping system assumes the analogue section will
integrate (a low-pass filter of 6 dB/octave with no finite cutoff frequency)
the 1-bit signal, limiting the resolution and/or amplitude of high
frequencies. Essentially...

Antony

Thomas W. Matthews

Dec 14, 1992, 5:22:15 PM

I thought I'd add a brief note.

When the 44.1 kHz signal has its sampling rate increased by a factor
of four, it is as if three zeros are added between each pair of samples.
However, these zeros are not output to the DAC. The signal is then
passed through a digital filter, whose response assigns values to the
samples that were previously zero. This is like interpolation, but not
straight-line interpolation. I describe the effect as an interpolation
that takes into account the upper limit of the signal bandwidth.
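A quick demonstration of the difference (my own sketch; the 8 kHz tone,
16-tap kernel and Hann taper are illustrative choices, not anything from a
real player): band-limited interpolation with a windowed-sinc kernel beats
straight-line interpolation badly once the tone is an appreciable fraction
of the sampling rate.

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def midpoint_bandlimited(x, n, half=8):
    """Estimate x[n + 0.5] with a Hann-windowed sinc kernel - the kind
    of bandwidth-aware interpolation a linear-phase digital filter
    performs."""
    acc = 0.0
    for k in range(n - half + 1, n + half + 1):
        d = (n + 0.5) - k                               # distance to sample k
        w = 0.5 * (1.0 + math.cos(math.pi * d / half))  # Hann taper
        acc += x[k] * sinc(d) * w
    return acc

def midpoint_linear(x, n):
    """Straight-line interpolation, for comparison."""
    return 0.5 * (x[n] + x[n + 1])

# An 8 kHz tone sampled at 44.1 kHz: linear interpolation of the
# midpoints undershoots noticeably, the windowed sinc does far better.
fs, f = 44100.0, 8000.0
x = [math.sin(2 * math.pi * f * t / fs) for t in range(300)]
```

The linear estimate is systematically low because averaging two samples
attenuates a tone at f by cos(pi*f/fs), about 16% at 8 kHz; the sinc kernel
has almost no attenuation there, which is the "upper limit of the signal
bandwidth" being taken into account.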

Tom Matthews
