So I decided to write a short little article to make it all clear.
It's a little longer than 'short', and it took me way longer than I
thought it would, but at least it's done and hopefully it's clear.
You can see it at
http://www.wescottdesign.com/articles/Sampling/sampling.html.
If you're new to this stuff, I hope it helps. If you're an expert and
you have the time, please feel free to read it and send me comments or
post them here.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Posting from Google? See http://cfaj.freeshell.org/google/
"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
> I've seen a lot of posts over the last year or so that indicate a lack
> of understanding of the implications of the Nyquist theory, and just
> where the Nyquist rate fits into the design of sampled systems.
>
> So I decided to write a short little article to make it all clear.
>
> It's a little longer than 'short', and it took me way longer than I
> thought it would, but at least it's done and hopefully it's clear.
>
> You can see it at
> http://www.wescottdesign.com/articles/Sampling/sampling.html.
>
> If you're new to this stuff, I hope it helps. If you're an expert and
> you have the time, please feel free to read it and send me comments or
> post them here.
>
Very nice. Should be distributed to universities so the kids learn some
real stuff.
Re "3.2 Nyquist and Signal Content": If Nyquist would have listened to
some of today's content (digital radio etc.) he'd have said that it
ain't worth sampling it :-)
I like the wording "line in the sand". Isn't that how Archimedes started
studying his circles? They didn't need any white board with the smelly
marker pens.
--
Regards, Joerg
>I've seen a lot of posts over the last year or so that indicate a lack
>of understanding of the implications of the Nyquist theory, and just
>where the Nyquist rate fits into the design of sampled systems.
>
>So I decided to write a short little article to make it all clear.
>
>It's a little longer than 'short', and it took me way longer than I
>thought it would, but at least it's done and hopefully it's clear.
>
>You can see it at
>http://www.wescottdesign.com/articles/Sampling/sampling.html.
>
>If you're new to this stuff, I hope it helps. If you're an expert and
>you have the time, please feel free to read it and send me comments or
>post them here.
Pretty good. My only quibble is with the various statements about what
Nyquist said and didn't say.
http://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem#Historical_background
"Exactly how, when, or why Nyquist had his name attached to the
sampling theorem remains obscure. The first known use of the term
Nyquist sampling theorem is in a 1965 book[4]. It had been called the
Shannon Sampling Theorem as early as 1954[5], but also just the
sampling theorem by several other books in the early 1950s."
It was actually Shannon (among others) who proved the sampling theorem;
Nyquist made an observation. Your bibliography doesn't cite either of
them. It's probably correct to use "Nyquist rate" but not "Nyquist
theorem."
John
Nyquist published his paper about the minimum required sample rate in
1928. Shannon was a kid of 12 years back then. The paper wasn't about
ADCs or sampling in today's sense but about how many pulses per second
could be passed through a telegraph channel of a given bandwidth.
> If you're new to this stuff, I hope it helps. If you're an expert and
> you have the time, please feel free to read it and send me comments or
> post them here.
>
"2.1 Aliasing
By ignoring anything that goes on between samples the sampling process
throws away information about the original signal. This information
loss must be taken into account during system design."
This seems like something of an oversimplification. If the original
signal is naturally or otherwise bandwidth-limited to well below
half the sample rate, there may not be any useful information
available to throw away, and the loss may not have to be taken into
account during system design.
I'm going to go off and do some web searching; in the mean time do you
have any URLs that point to the seminal papers?
I can squirm out of that objection:
If you have the continuous-time signal, then you _know_ there's nothing
of note above Fs/2. If you don't have the continuous-time signal, then
you _can't_ know there's nothing of note above Fs/2, unless the sampled
signal train comes with a Certificate of Limited Bandwidth.
Later on in the article I talk about signals that are, indeed,
sufficiently bandlimited by their nature, and the fact that you probably
don't want to do any explicit anti-alias filtering in such a case.
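Tim's point (that the sampled data alone cannot certify its own bandwidth)
is easy to demonstrate numerically. Here is a short NumPy sketch of my own;
the 7 kHz and 3 kHz tones and the 10 kHz sample rate are arbitrary
illustrative choices, not from the article:

```python
import numpy as np

fs = 10e3                      # sample rate: 10 kHz
n = np.arange(32)              # sample indices
t = n / fs                     # sample instants

# A 7 kHz tone is above fs/2 = 5 kHz, so it aliases.
x_high = np.sin(2 * np.pi * 7e3 * t)

# Its alias lands at |7 kHz - fs| = 3 kHz; the sign flips because
# sin(2*pi*7e3*n/fs) = sin(2*pi*n - 2*pi*3e3*n/fs) = -sin(2*pi*3e3*n/fs).
x_alias = -np.sin(2 * np.pi * 3e3 * t)

# The two sample streams are identical: no amount of post-processing
# can tell them apart without out-of-band knowledge of the source.
print(np.allclose(x_high, x_alias))   # True
```

Any out-of-band energy folds into the baseband exactly this way, which is
why an anti-alias filter has to act before the sampler, not after.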
The wiki article has links to Nyquist's 1928 paper and to Shannon's
1949 paper. Few EEs that I've met have ever read either of them, and
have absorbed "the Nyquist theorem" mostly by hearsay, which I guess
is your point.
John
>I like the wording "line in the sand". Isn't that how Archimedes started
>studying his circles? They didn't need any white board with the smelly
>marker pens.
Don't you go knocking whiteboards. A big board with a nice fresh set
of markers will multiply my IQ by about 1.3 or so. Sand doesn't work
nearly as well.
Oh well, back to the drawing board, literally.
John
> I'm going to go off and do some web searching; in the mean time do you
> have any URLs that point to the seminal papers?
>
This one could be a start, written by my old communications theory
professor (he was actually one of the really nice profs):
http://www.ee.technion.ac.il/courses/044130/00755459.pdf
><snip>
>The wiki article has links to Nyquist's 1928 paper and to Shannon's
>1949 paper. Few EEs that I've met have ever read either of them, and
>have absorbed "the Nyquist theorem" mostly by hearsay, which I guess
>is your point.
I've read Shannon's paper, thoroughly. At least, these: The Bell
System Technical Journal, Vol. 27, pp. 379–423, 623–656, July,
October, 1948. Note that this is NOT 1949. But it was, in fact, what
made me understand Boltzmann much better than before and allowed me to
better access the underlying meaning of macro concepts such as
temperature and entropy (which have no micro-meaning.)
Hamming, Shannon, and Golay all worked together in the same place, if
I recall, around that time. Marcel J. E. Golay, 1949, and a little
(short) paper called, "Notes on Digital Coding" which came out in
1949. (I think he was pushed into it by Hamming and Shannon.)
Jon
IMHO. I could be wrong. I have never seen a formal proof, just some
handwaving.
In article <UaOdnVhcfetF6IDY...@web-ster.com>, Tim Wescott
Except it was Shannon that said it...
--
Posted via a free Usenet account from http://www.teranews.com
Some more info and a correction.
First, the correction.
Golay didn't actually work with Shannon at Bell Labs. His first
paper, the one I mentioned called "Notes on Digital Coding," was
published in the Correspondence section of Proc. I.R.E., 37, 657
(1949) and was written while he was at the Signal Corps Engineering
Laboratories in Fort Monmouth, N.J.
Now, the additional info.
Golay's 1949 paper is supplemented by two more papers he wrote:
"Binary Coding" I.R.E. Trans. Inform. Theory, PGIT-4, 23-28 (1954) --
written also from Fort Monmouth, NJ; and "Notes on the Penny-Weighing
Problem, Lossless Symbol Coding with Nonprimes, etc.," I.R.E Trans.
Inform. Theory, IT-4, 103-109 (1958) -- written from Philco
Corporation when in Philadelphia, Pa.
Twelve of Shannon's papers (1948 through 1967) are conveniently
collected in the anthology, "Key Papers in the Development of
Information Theory," edited by David Slepian (IEEE Press, 1974.)
Shannon referenced Golay's 1949 paper in the book "The Mathematical
Theory of Communication" (written with Warren Weaver, Univ. Illinois
Press, 1949.) This book contains a slightly rewritten version of
Shannon's first 1948 papers together with a popular-level paper by
Weaver.
Shannon describes the Hamming-7 code in his 1948 papers in section 17,
attributed to Hamming there, but since there is no reference to a
specific paper by Hamming I suspect this reference must have been via
personal communication with Hamming. (Golay also refers to the
Hamming-7 code in Shannon's first paper.)
The first paper by Hamming is "Error Detecting and Error Correcting
Codes" Bell System Tech. J., 29, 147-160 (1950.) Note this is
actually _after_ Shannon's reference to Hamming's code. The
anthology, "Algebraic Coding Theory: History and Development," edited
by Ian F. Blake (Dowden, Hutchinson & Ross, 1973) includes this paper.
Blake says in his introduction to the first 9 papers in his anthology:
"The first nontrivial example of an error-correcting code appears,
appropriately enough, in the classical paper of Shannon in 1948. This
code would today be called the (7,4) Hamming code, containing 16 = 2^4
codewords of length 7, and its construction was credited to Hamming by
Shannon. Golay gives a construction that generalizes this code over
GF (p), p a prime number, of length (p^n -1)/(p - 1) for some positive
integer n. Hamming also obtained the same generalization of his
example of codes of length (2^n - 1) over GF(2) and investigates their
structure and decoding in some depth. The codes of both Golay and
Hamming are now designated as Hamming codes. The interest of Golay
was in perfect codes, which have also been called lossless, or
close-packed, codes. Since he mentions the binary repetition codes
and gives explicit constructions for his remarkable (23,11) binary and
(11,6) ternary codes, it is not stretching a point to say that in the
first paper written specifically on error-correcting codes, a paper
that occupied, in its entirety, only half a journal page, Golay found
essentially all the linear perfect codes which are known today. ...
The multiple error-correcting perfect codes of Golay, now called Golay
codes, have inspired enough papers to fill a separate volume."
By the way, the Hamming-7 code can be used to generate the E7 root
lattice, which corresponds to the E7 Lie algebra. And the Hamming-8
code similarly generates the E8 root lattice, corresponding to the E8
Lie algebra. Heterotic superstring theory has an E8 x E8 symmetry
which is needed for anomaly cancellation. It is nifty that the very
first error-correcting codes of Golay and Hamming play such a profound
role in modern superstring theory.
An interesting supplemental work is from J. H. Conway and N. J. A.
Sloane, "Sphere Packings, Lattices and Groups" from Springer-Verlag,
1988. But probably the best ever book on algebraic coding theory is:
"The Theory of Error-correcting Codes" by F. J. MacWilliams and N. J.
A. Sloane, North-Holland Publishing Co., 1977.
Jon
>Some more info and a correction.
You guys are good.
I doubt I'll ever read all that myself, but I do appreciate the work and
motivation it took to take the time to care and find it all.
I'll only say that I hope you all keep your enthusiasm to dig,
find the best answer, and share.
It's why I hang out here, read, and snipe occasionally.
As Martha said, even back before she became a capitalistic criminal,
"It's a good thing."
Tim,
I really like your style of writing. Accessible, and yet with no hint
at the "for dummies" craze.
I hadn't really thought of the Signal-to-Aliased-Energy ratio as a
metric for signal quality before. It will be a lot easier to discuss
the quality of sampled systems with that as a tool. Very good
presentation of both the concept and the effect. I wish it was me
who thought of that sequence of figures 6-9... oh well.
A couple of very subjective comments:
- There is something about the typography of the page that annoys me.
I can't really see if there is an open line between paragraphs inside a
section; if there is, you may want to make it clearer
- The footnote indicators are WAY too small to be useful, or even seen.
- I can't see why you need the maths paragraph ("To understand
aliasing...", eqs. (2),(3)) at all in section 2.1. I think that what
you say there is covered by the plain-text parts of the section.
Laymen might find that maths paragraph scary, throwing them off your
article, while technicians know it already.
- Similar comments apply for eq. (4). I can't see that it is strictly
necessary, the explanation is in the text anyway.
- Eqs. (5)-(7) seem to be necessary, but are the only big equations
in the article. Hmm... that makes me wonder...
As far as I can tell, you are 99.9% at the point where this is an
article a non-engineer layman can read and understand. If you
find a way to get rid of equations 1-7, maybe even eq. 8, you
ought to be there.
Impressive work!
Rune
No expert, but figure 14 could possibly be improved by explicitly showing
the rising edge of the pulse, maybe start the time axis at -0.5 or start
the pulse at +0.5
The "contact us" link appears to be broken too.
--
Bye.
Jasen
Very good. I would change this phrase:
"This will allow you to use a more open anti-aliasing filter."
By "more open", I gather you mean a larger transition ratio, which
lowers the Q of the resonators.
Note that the Bessel filter will ring at higher orders. I don't have my
copy of Zverev handy, but I think the Bessel rings at 4th order and
higher. The Gaussian filter doesn't ring at any order. The key is to
look at the impulse response of the filter. If it ever goes negative,
then the filter will ring.
You might want to go into the inverse sinc response requirements in the
reconstruction filter.
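To put a number on the inverse-sinc requirement: a zero-order-hold DAC
multiplies the spectrum by sinc(f/fs), so the output droops to 2/pi
(about -3.9 dB) by fs/2. A quick NumPy check (my own sketch, not from the
article):

```python
import numpy as np

fs = 1.0                       # normalized sample rate

# Zero-order-hold frequency response magnitude: |sinc(f/fs)|
# (np.sinc is the normalized sinc, sin(pi*x)/(pi*x)).
def zoh_droop(f):
    return np.abs(np.sinc(f / fs))

for frac in (0.1, 0.25, 0.5):
    d = zoh_droop(frac * fs)
    print(f"f = {frac:4.2f}*fs: droop = {d:.4f} ({20*np.log10(d):+.2f} dB)")

# At fs/2 the droop is exactly 2/pi, about 0.6366 (-3.92 dB), so an
# "inverse sinc" boost of roughly 3.9 dB at band edge restores flatness.
```

Whether you equalize that in the analog reconstruction filter or
pre-distort digitally is a design choice; the droop itself is fixed by
the hold.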
>> http://www.wescottdesign.com/articles/Sampling/sampling.html.
>As far as I can tell, you are 99.9% at the point where this is an
>article a non-engineer layman can read and understand. If you
>find a way to get rid of equations 1-7, maybe even eq. 8, you
>ought to be there.
Since understanding the formula _notation_ is not essential for
understanding most of the rest of the text, I would suggest moving the
formulas into separate boxes and moving these boxes out of the direct
text flow e.g. into a box in the right margin of the paper.
>Impressive work!
Definitively !
The problem with quite a few texts dealing with sampling is that they
are written by mathematicians for mathematicians. However, these
days, most people using various DSP algorithms are programmers, not
mathematicians, so the terse mathematical notation can be hard for
them to understand, especially without much numerical analysis
background.
Instead of using the terse mathematical notation, it might be more
productive for most readers to publish e.g. a Fortran/Algol/C
algorithm.
Paul
> Note that the Bessel filter will ring at higher orders. I don't have my
> copy of Zverev handy, but I think the Bessel rings at 4th order and
> higher.
Are you sure about that? Here is a 9th order 1 MHz Bessel for LTspice.
(Save as a CKT file)
* UTS Mike Monett
* Converted From Micro Cap Source file to LTspice
*
C1 0 1 248.3PF
C2 0 2 1200.0PF
C3 0 3 2007.3PF
C4 0 4 2749.9PF
C5 0 Vout 7209.4PF
L1 1 2 1.8UH
L2 2 3 4.1UH
L3 3 4 5.9UH
L4 4 Vout 8.6UH
R1 Vin 1 50
R5 0 Vout 50
V1 Vin 0 DC 0 PULSE (0 1 0 0 0 2.5e-006 5e-006)
.TRAN 1e-008 10u 0 1n UIC
.PRINT TRAN V(VOUT) V(VIN)
.PLOT TRAN V(VOUT) V(VIN)
.PROBE
.END
;$SpiceType=SPICE3
It doesn't ring.
Regards,
Mike Monett
Antiviral, Antibacterial Silver Solution:
http://silversol.freewebpage.org/index.htm
SPICE Analysis of Crystal Oscillators:
http://silversol.freewebpage.org/spice/xtal/clapp.htm
Noise-Rejecting Wideband Sampler:
http://www3.sympatico.ca/add.automation/sampler/intro.htm
Tim Wescott wrote:
> I've seen a lot of posts over the last year or so that indicate a lack
> of understanding of the implications of the Nyquist theory, and just
> where the Nyquist rate fits into the design of sampled systems.
>
> So I decided to write a short little article to make it all clear.
>
> It's a little longer than 'short', and it took me way longer than I
> thought it would, but at least it's done and hopefully it's clear.
>
> You can see it at
> http://www.wescottdesign.com/articles/Sampling/sampling.html.
>
> If you're new to this stuff, I hope it helps. If you're an expert and
> you have the time, please feel free to read it and send me comments or
> post them here.
>
I just had to stop reading about half way through. The writing style and
organization are structurally incoherent and come across as desultory
gibberish with intolerably sloppy terminology and perspective. On the
upside, it should be somewhat beneficial to the low caliber types who
are confused by the subject matter.
Tim,
I like the article and totally agree that there's a lot of smoke and
mirrors
and misuse of sampling theorem ideas. I would definitely point
newcomers to your description.
Please don't take the following as flaming.
But I just can't resist throwing in a couple of my favourite pet
peeves :-(
I echo the sentiments of an earlier comment about
fs >= 2fmax
versus
fs > 2fmax
If you look at e.g. Oppenheim & Willsky, Signals & Systems p. 519 they
clearly state it in terms of a proper greater-than. However, if you look
at Shannon & Weaver "The Mathematical Theory of Communication" p. 86 "Band
limited ensembles of functions" it's less clear (verbiage rather than
math): they say (more or less) "let f(t) contain no frequencies over W.
Then you can sample at 2W". This would imply the greater-than-or-equal,
which I suspect is incorrect.
In fact, in your article you point out that you cannot sample a sine
wave at exactly 2*f unless you know the phase, which clearly
contradicts the ">=" interpretation. (Actually you need the phase
and the amplitude.)
I'm not entirely sure of the history but I suspect that the
band-limiting can actually be traced much farther back, to Fourier
analysis in the mid 1800's.
In the same vein, I've thought that one could do a neat write-up of
how to reconstruct from the sampling of sin(2*pi*f*t) at a frequency
approaching (but not equal to) 2f. If you describe how the
almost-aliased-to-DC low frequency samples (regardless of phase) get
turned back into a full amplitude signal through the magic of sinx/x
reconstruction, it would be very cool. The samples would (where phase
approached one side) be almost perfect (+1,-1,+1,-1) but as the phase
drifted the other way, the envelope would attenuate (+0.0001, -0.0001,
etc). Through the magic of summing an infinite series, you get your
full-size plain old sine wave back. But that might be more math than
the scope of this article.
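The reconstruction Kenn describes can be tried numerically. The sketch
below is my own; the tone at 0.45*fs and the truncated (rather than
infinite) sinc sum are arbitrary choices, so the reconstruction is only
approximate:

```python
import numpy as np

f = 0.45                       # tone frequency in units of fs, just below fs/2
N = 2000                       # half-width of the (truncated) sample record
n = np.arange(-N, N + 1)       # sample indices
x = np.sin(2 * np.pi * f * n)  # samples: a beating +A,-A,+a,-a,... pattern

# Whittaker-Shannon reconstruction at off-sample instants near the
# middle of the record, where the truncation error is smallest.
t = np.linspace(-2.0, 2.0, 41)
x_hat = np.array([np.sum(x * np.sinc(ti - n)) for ti in t])

# The full-amplitude sine comes back from the low-amplitude "beat"
# samples via the sinc summation; the residual is truncation error.
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * f * t)))
print(err)
```

Pushing f closer to 0.5*fs makes the series converge more slowly, so the
truncated sum needs ever more terms, which is the practical face of the
strict inequality.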
I also would propose a change to sec 3.3 about repetitive signals.
There is really no difference between this example and 3.4 - band
limited signals! The only reason that the powerline sampling in sec 3.3
works is that you're talking about a bandlimited "modulation" signal
that's modulating a 60Hz carrier! So it's not really the case that
"sometimes you can sample slower than the Nyquist Thm would indicate"...
this is a little misleading. I'd introduce it more like "some classes
of signals, which appear to be fast, are actually low bandwidth;
they're just a slowly modulated carrier, so you get a low Fs". (I'm
not being precise but I hope you get my drift).
Cheers,
- Kenn
Hello Mike,
It seems you have overlooked that LTspice assumes a certain rise and
fall time if you specify t_rise=0 or t_fall=0.
V1 Vin 0 DC 0 PULSE (0 1 0 0 0 2.5e-006 5e-006)
LTspice assumes in this case trise = fall = 10%*Twidth = 250ns
This is the reason why you see no overshoot.
Now we use fast edges.
V1 Vin 0 DC 0 PULSE (0 1 0 1n 1n 2.5e-006 5e-006)
You will see about 1.6% overshoot. By the way, I am not sure
how precise the chosen component values are to get the ideal Bessel
filter response. Bessel filters do indeed have overshoot in the step
response.
Best regards,
Helmut
>> mi...@sushi.com wrote:
>> It doesn't ring.
>> Regards,
>> Mike Monett
> Hello Mike,
> Best regards,
> Helmut
Hi Helmut,
Thanks for taking a look at this, and for pointing out that LTspice
changes the rise and fall times if you do not specify the values. I
am still learning how LTspice works, and these unexpected variations
from other SPICE programs can be major pitfalls.
In this case, the amount of overshoot is very small, and I'm not
sure whether the 1.6% is caused by the way the component values are
rounded.
I use this filter program for general filter design. The component
values were calculated for the theoretical values and the inductors
were rounded to 2 significant digits. This reflects the difficulty
in obtaining precision inductors, particularly at VHF and UHF.
The capacitors were rounded to tenths of a pf. This is because the
program is used mostly for work at VHF and UHF, where the caps are
much smaller. Using the correct theoretical values for the
components may well reduce or eliminate the small amount of
overshoot you measured.
However, there is still the practical matter of obtaining inductors
and capacitors with the needed precision, and any practical filter
will have imperfect components. I have found the Bessel and
Equiripple to be much more tolerant of errors in component values
than other filter types, and still give good performance.
If we are concerned about such small values of overshoot, we need to
repeat this using the correct theoretical values for the components.
Even so, an overshoot of 1.6% is not in the same category as the
overshoot from Butterworth or other non-linear phase filters, which
may have ten percent or more, depending on the sharpness of the
cutoff. In practice, the Bessel and Equiripple are considered linear
phase filters, and have negligible overshoot.
For example, a Butterworth or other nonlinear group delay filter may
cause excessive timing errors when used to filter digital data. This
can be minimized by changing to a Bessel or Equiripple filter.
So practically speaking, the Bessel and Equiripple distinguish
themselves from other filters by the constant group delay through
the filter passband, and the low or nonexistent overshoot.
Is it just me, or did anyone else get a strong sense of deja-vu?
-Dave
--
David Ashley http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
Finally! Someone who sees my writing the way I do! I think we're in
the minority, though.
I don't think of the intended audience as "low caliber", though -- just
people who've had a different educational background, and whose
back-brains don't light up with equations when they see a fountain, or a
near miss on the freeway.
I have to say that this paper rather turned into a monster while I was
writing it. I thought it was going to be between 2000 and 3000 words,
with almost no math and very little real work. Instead it's about 5500
words, and I've got about a man-week into all those pretty charts and
graphs (I should publish an appendix with "the making of..." along with
all the math underneath).
As a reaction to this I haven't done my usual stage of letting it rest
and getting back to it -- I was afraid I'd never do the "getting back to
it" step. Instead I've put it outside without giving it time to get
its coat and boots on. If I can figure out how to tighten it up I
certainly will -- assuming that I don't run away screaming at the
thought of doing even _more_ work on it.
> Tim Wescott wrote:
>
>>I've seen a lot of posts over the last year or so that indicate a lack
>>of understanding of the implications of the Nyquist theory, and just
>>where the Nyquist rate fits into the design of sampled systems.
>>...
>
>
> It is just me or did anyone else get a strong sense of deja-vu?
>
> -Dave
>
Yes, I did write essentially that at the head of a posting soliciting
suggestions for the article -- and you all helped.
(note to self -- add an acknowledgments section)
Bessel filters do have a small overshoot, which decreases at higher orders.
2nd .43%
4th .84%
6th .64%
8th .34%
10th .06%
note the *very low* value for higher orders. In fact, the frequency
response hardly gets any better beyond that.
The Gauss filter has indeed zero ringing, but a much larger transition band.
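For an ideal-component cross-check of these numbers, SciPy's analog
Bessel prototype can be stepped directly. This is my own sketch; the
exact percentages depend on the normalization used, so treat them as
approximate:

```python
import numpy as np
from scipy import signal

# Step-response overshoot of ideal analog Bessel low-pass prototypes.
# SciPy's default normalization ('phase') gives unity DC gain, so the
# final value of the step response is 1 and overshoot = peak - 1.
for order in (2, 4, 6, 8, 10):
    b, a = signal.bessel(order, 1.0, btype='low', analog=True)
    t = np.linspace(0, 30, 3000)
    _, y = signal.step((b, a), T=t)
    overshoot = y.max() - 1.0
    print(f"order {order:2d}: overshoot = {100 * overshoot:.2f}%")

# These should all come out well under 1%, consistent with the Bessel's
# reputation for "low or negligible" overshoot next to Butterworth or
# Chebyshev designs -- small, but not exactly zero.
```

Repeating this with the component values rounded the way the LTspice
netlist rounds them would separate the intrinsic overshoot from the
rounding effects.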
--
ciao Ban
Apricale, Italy
You really need to look at theory versus what spice predicts. The input
is an ideal step. As you do the convolution, the negative region of the
impulse response subtracts from the result, making the signal drop in
value, then the positive regions make the result increase in value,
hence ringing. The Gaussian impulse response is always positive, hence
the convolution output can't decrease as time increases.
Linear phase alone is not enough to stop ringing.
The discrete time situation is easier to understand, especially if you
consider a finite impulse response filter. The response of a FIR filter
to a step input is the sum of the tap coefficients from one to N.
That is, the first output is the first tap. The second output is the
sum of the first two taps, etc. If no tap is negative, then the output
always rises, hence no ringing.
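The cumulative-sum argument above takes only a few lines of NumPy to
verify (the two tap sets below are arbitrary illustrations, not filters
from this thread):

```python
import numpy as np

# Step response of an FIR filter = running sum of its taps.
def step_response(taps):
    return np.cumsum(taps)

# All-nonnegative taps (a crude Gaussian-ish smoother): the running
# sum never decreases, so the step response cannot ring.
smooth = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
assert np.all(np.diff(step_response(smooth)) >= 0)

# Taps with negative lobes (a sharper, sinc-like low-pass): the running
# sum dips below zero and overshoots the final value -- that's the ring.
sharp = np.array([-0.05, 0.12, 0.43, 0.43, 0.12, -0.05])
print(step_response(sharp))
```

The sharper cutoff buys its steeper rolloff with those negative lobes,
which is exactly the brick-wall-versus-ringing tradeoff.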
No circuits were simulated in writing this post. ;-)
[snip]
>
>No circuits were simulated in writing this post. ;-)
Bwahahahaha! ROTFLMAO!
...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona Voice:(480)460-2350 | |
| E-mail Address at Website Fax:(480)460-2142 | Brass Rat |
| http://www.analog-innovations.com | 1962 |
I love to cook with wine. Sometimes I even put it in the food.
Ban,
Thanks for the clarification. In most references, the Bessel is considered
to have low, negligible, or no overshoot, especially when compared to
Butterworth, Chebyshev and other types of filters. Your numbers confirm
this.
In practice, it is difficult to obtain the exact component values needed
for the theoretical performance. Not only are the values non-standard, but
it may be difficult to get the inductor "Q" values used in most
calculations. So we can assume there will be some deviation from the
theoretical performance, and the overshoot will probably increase slightly.
However, it is still low enough to be difficult to measure, and the terms
"low", "negligible" or "no overshoot" are quite descriptive.
> You really need to look at theory versus what spice predicts. The
> input is an ideal step. As you do the convolution, the negative
> region of the impulse response subtracts from the result, making
> the signal drop in value, then the positive regions make the result
> increase in value, hence ringing. The Gaussian impulse response
> is always positive, hence the convolution output can't decrease as
> time increases.
> Linear phase alone is not enough to stop ringing.
> The discrete time situation is easier to understand, especially if
> you consider a finite impulse response filter. The response of a
> FIR filter to a step input is the the sum of the tap coefficients
> from one to N.
> That is, the first output is the first tap. The second output is
> the sum of the first two taps, etc. If no tap is negative, then
> the output always rises, hence no ringing.
Please see Ban's post and my reply. Most references describe the
Bessel as having low, negligible, or no overshoot. Ban's numbers
show a theoretical overshoot of less than 1%, decreasing with higher
order. In any practical LCR filter, the Bessel will have such low
overshoot as to be difficult or impossible to measure. So it doesn't
make sense to split hairs between <1% and zero overshoot.
DSP is a quite different story, and you can realize filters that are
impossible in real life. But it would be difficult to implement DSP
at higher frequencies.
Well a few neurons were tortured trying to recall all that theory. No
wait, make that extraordinarily rendered.
There are differences between analog and digital filters. Digital means an
approximation of the desired characteristic in the passband, but the poles
and zeros are modified to compensate for the modulation effects. Aliasing is
always happening, but it can be reduced below the noise floor with the
analog input filter.
Linear phase has a very undesirable side effect: it rings *before and after*
the step, which is supposed to be more audible.
And Audio is a very forgiving métier, because the ears themselves function
as reconstruction filters, suppressing all those high frequency artifacts.
If you want to display the signal with a digital scope or ECG, you'd
better start with 5 to 10 times the upper frequency rolloff. Look at
the specs there; nobody even considers Nyquist adequate, except
programmers having no idea of reality.
Nyquist has a lot to say about how far you *can* go and
little to say about how far you *should* go. Those of us
who are forced to push the Nyquist limits may have a better
appreciation of this.
For example, compare the frequency response of a 5-pole IIR
Butterworth low-pass filter designed to roll off at 1 KHz.
with one designed to roll off at 10 KHz., both filters
having been designed around a sampling frequency of 50 KHz.
Things start to get nasty above Fs/10 and the harder you
push Nyquist, the worse it gets.
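The comparison suggested above is straightforward to reproduce with SciPy
(a sketch of mine, using the 5 poles, 50 kHz sample rate, and 1 kHz /
10 kHz cutoffs from the post):

```python
import numpy as np
from scipy import signal

fs = 50e3                          # sample rate, Hz

for fc in (1e3, 10e3):
    b, a = signal.butter(5, fc, btype='low', fs=fs)
    # Evaluate |H| at the cutoff and at twice the cutoff.
    f_eval = np.array([fc, 2 * fc])
    _, h = signal.freqz(b, a, worN=2 * np.pi * f_eval / fs)
    print(f"fc = {fc/1e3:4.1f} kHz: |H(fc)| = {abs(h[0]):.3f}, "
          f"|H(2fc)| = {abs(h[1]):.5f}")

# Both filters hit ~0.707 at fc (the bilinear design prewarps to meet
# the spec there), but the 10 kHz filter falls off much harder at 2fc:
# frequency warping and the zeros pinned at fs/2 distort the response
# shape once fc gets much above roughly fs/10.
```

The design tool will always hand you something that meets the -3 dB spec;
it's the rest of the response shape that quietly changes as you push the
cutoff toward Nyquist.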
Nyquist says you can put a sharp +/- 500 KHz analog filter
around a 21.4 MHz IF and sample it at 2.14 MHz. What
Nyquist doesn't say is what that does to your
signal-to-noise ratio. TANSTAAFL!
Nyquist is often used by slick, hand-waving charlatans to
over-sell their capabilities without having a clue about the
real trade-offs involved. (And, I'll bet Nyquist didn't say
*that*, either.)
Ban wrote:
>
> And Audio is a very forgiving métier, because the ears themselves function
> as reconstruction filters, suppressing all those high frequency artifacts.
And audio is a very unforgiving matter, because the distortions of the
audio amp grow quickly towards the high frequencies. You won't hear
the high frequency artifacts as they are, but you will very well hear
the result of the nonlinear distortion of those.
Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
> Don't you go knocking whiteboards. A big board with a nice fresh set
> of markers will multiply my IQ by about 1.3 or so.
But only with the smelly markers. The water-based stuff doesn't do it
for me.
robert
> In the first sentence of your paper, you say that if the signal is band
> limited to fo or less, then a sample frequency of 2fo or more is adequate to
> contain completely all the information necessary to recreate the signal. My
> understanding from water cooler conversation is that the signal bandwidth must
> be strictly less than fo for a sample frequency of 2fo and that the signal
> must be infinite in extent to allow perfect reconstruction.
I see it that way as well. A pure sine signal of exactly f_0, sampled at
exactly 2f_0, might come out as all-zero, or as a pulse train with
alternating polarity and an amplitude of anything between 0 and V_p.
There is not enough information to reconstruct the signal in this case.
However, if you sample the same signal at 2f_0+f_e (f_e being very
small, think epsilon), it will come out as a pulse train with
alternating polarity and the amplitude modulated by a beat frequency
f_e. This would seem to be not enough information to reconstruct the
signal, but in fact that's not true: A f_0 sine is the only possible
input signal because the modulated pulse train contains frequency
components greater than f_0 which couldn't have been in the input signal
(which, as a prerequisite to Nyquist, is brick-walled at f_0).
So, the way I see it is that the sampling frequency must be strictly
greater than the highest frequency in the input.
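The exactly-2*f_0 ambiguity can be shown directly; here is a small sketch
of my own (the frequencies are arbitrary):

```python
import numpy as np

f0 = 1.0                       # signal frequency
fs = 2.0 * f0                  # sampling at exactly 2*f0
n = np.arange(16)
t = n / fs

# Samples of sin(2*pi*f0*t + phi) at fs = 2*f0 reduce to (-1)^n * sin(phi):
for phi in (0.0, np.pi / 4, np.pi / 2):
    x = np.sin(2 * np.pi * f0 * t + phi)
    print(f"phi = {phi:.3f}: samples = +/-{abs(np.sin(phi)):.3f}")

# phi = 0 gives all zeros; phi = pi/2 gives full-amplitude +1,-1,+1,...
# The sample amplitude depends entirely on the (unknown) phase, so the
# original amplitude cannot be recovered -- exactly the ambiguity that
# forces the strict inequality fs > 2*f0.
```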
Thanks, Tim, for a great write-up on the subject!
robert
> I have to say that this paper rather turned into a monster while I was
> writing it. I thought it was going to be between 2000 and 3000 words,
> with almost no math and very little real work. Instead it's about 5500
> words, and I've got about a man-week into all those pretty charts and
> graphs (I should publish an appendix with "the making of..." along with
> all the math underneath).
>
> As a reaction to this I haven't done my usual stage of letting it rest
> and getting back to it -- I was afraid I'd never do the "getting back to
> it" step. Instead I've put it outside without giving it time to get
> its coat and boots on. If I can figure out how to tighten it up I
> certainly will -- assuming that I don't run away screaming at the
> thought of doing even _more_ work on it.
Let me tell you that this article is so high-class from an educational
point of view that it /deserves/ coat and boots. Next time anybody comes
up to me and wants something explained about Nyquist I can just point
them at your page, and I'm sure many others will do likewise.
The only thing I'd want is the whole thing as a pretty, nicely printable
PDF document. But don't listen to me.
robert
Modulation effect? All you have are two different domains (s and z).
> aliasing is
> always happening, but it can be reduced below the noise floor with the
> analog input filter.
> Linear phase has a very undesirable side effect, it rings *before and after*
> the step, supposed to be more audible.
Ring before the signal arrives? That sounds non-causal to me.
For audio signal processing, the filters are generally active, so no
inductors are used. [speaker crossovers excepted.]
In many applications, ringing cannot be tolerated. Scales for instance.
>
>Ban wrote:
>> mi...@sushi.com wrote:
>> >
>> Linear phase has a very undesirable side effect, it rings *before and after*
>> the step, supposed to be more audible.
>
>Ring before the signal arrives? That sounds non-causal to me.
>
ISTR Roger Lagadec at Studer (and Sony) pointing this out in the early
1980's, after listening tests on early digital systems with piano
music showed that something was amiss. Probably in AES and IEEE
archives.
martin
> Ring before the signal arrives? That sounds non-causal to me.
Please read more carefully. The filter rings before the main part of the
output step *emerges* but after the step arrives at the input. The
filter's inherent delay makes that quite possible.
Jerry
--
"The rights of the best of men are secured only as the
rights of the vilest and most abhorrent are protected."
- Chief Justice Charles Evans Hughes, 1927
It's only non-causal if it arrives before it was sent. :-)
There is no reason why the main signal should not arrive later than
some of the crud which accompanies it. It's perfectly normal in a
dispersive medium.
Steve
Tim Wescott wrote:
> I've seen a lot of posts over the last year or so that indicate a lack
> of understanding of the implications of the Nyquist theory, and just
> where the Nyquist rate fits into the design of sampled systems.
>
> So I decided to write a short little article to make it all clear.
>
> It's a little longer than 'short', and it took me way longer than I
> thought it would, but at least it's done and hopefully it's clear.
>
> You can see it at
> http://www.wescottdesign.com/articles/Sampling/sampling.html.
>
> If you're new to this stuff, I hope it helps. If you're an expert and
> you have the time, please feel free to read it and send me comments or
> post them here.
>
May I ask what software you used to render the maths on that page? It
looks clearer than the stuff I produce. MathML is getting into browsers
now, but the rendering of that looks so bad with anything I have tried,
that inserted images in HTML pages still seems the only practical approach.
I'd still like to see a web page I can point people to when they say a
10kHz sine wave on a CD will come out as a square wave/triangular
wave/some other weird notion.
Steve
Here is the line verbatim:
"Linear phase has a very undesirable side effect, it rings *before and
after*
the step, supposed to be more audible."
Nothing wrong with my reading. Now if you are somehow looking at the
output to interpret where the large transition occurred, that is a
different story. However, any filter where the impulse response goes
negative will have such ringing, be it linear phase or not. You need
to visualize the convolution.
Don't think of a single signal. An impulse (or a step) has a wide
range of frequency components. Some are delayed more than others
when passing through the filter.
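The point about visualizing the convolution can be seen directly by pushing a step through a symmetric (linear-phase) FIR: the taps on the near side of the center produce output ripple before the main transition emerges, yet everything stays causal. A minimal sketch, assuming NumPy (the filter length and cutoff are arbitrary illustrative choices):

```python
import numpy as np

# A linear-phase (symmetric) lowpass FIR: a Hamming-windowed sinc.
# Its negative side lobes sit on BOTH sides of the center tap.
N = 31                       # taps (odd -> integer group delay)
fc = 0.25                    # cutoff as a fraction of fs
k = np.arange(N) - (N - 1) / 2
h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(N)

# Step response: convolve a unit step with the filter.
step = np.ones(100)
y = np.convolve(step, h)

# The output dips below its starting level BEFORE the main rise,
# which is centered at the group delay of (N-1)/2 = 15 samples:
# that is the pre-ringing, and it is entirely causal.
print(y[:15].min())   # negative: ringing before the edge emerges
```

Nothing here arrives "before it was sent"; the ringing merely precedes the *main* output transition, which the filter's inherent 15-sample delay makes possible.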
--
Some informative links:
<news:news.announce.newusers>
<http://www.geocities.com/nnqweb/>
<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/>
SciLab
>
> I'd still like to see a web page I can point people to when they say a
> 10kHz sine wave on a CD will come out as a square wave/triangular
> wave/some other weird notion.
Well, get writing!
>
> Steve
It's called *pre-ringing* and it appears because the chunks are processed
forward and backward in a row, so a unit pulse will have identical
rising and falling edges. If the filter is of the ringing type, the
ringing thus occurs twice.
You are right in saying it's impossible, but only in an analog world.
Digital filters do have a latency which will always be longer than the delay
of the corresponding analog filter; with linear phase it will be twice the
FIR size plus twice the conversion time, more than double that of the
analog counterpart.
Well done analog filters are of the *minimum phase* type, having just the
lowest possible delay for that shape of output response. This is possible to
realize digitally with IIR filters only.
And do not think that even a Gaussian filter has only positive
FIR coefficients. That would only be true for a filter of infinite
length, which obviously isn't desirable at all. For practical lengths,
the locations of the poles and zeros have to be modified, and one
might even get negative coefficients, depending on the ratio of the
sampling and filter frequencies and on the filter length.
The Gaussian to which I refer is in the S domain. If you mapped it to
the Z domain, it would have to be IIR, not FIR.
Even steep linear phase analogue filters will exhibit pre-ringing.
If you were to linearise the phase response of, say, a Butterworth
filter, by adding one or more all-pass sections, its impulse
response will ring before and after the main output. Of course,
the overall delay must go up for the filter to remain causal.
Jeroen Belleman
>> It was actually Shannon (among others) that did the sampling theorem;
>> Nyquist made an observation. Your bibliography doesn't cite either of
>> them. It's probably correct to use "Nyquist rate" but not "Nyquist
>> theorem."
> Nyquist published his paper about the minimum required sample rate in
> 1928. Shannon was a kid of 12 years back then. The paper wasn't about
> ADCs or sampling in today's sense but about how many pulses per second
> could be passed through a telegraph channel of a given bandwidth.
(and be distinguished on the other end).
The important point being that the math is the same even though the
goal is different. I suppose, then, the sample rate should be
a lemma to Nyquist's telegraph channel theorem.
By the way, Gauss published the first paper on the FFT.
-- glen
...
> Well done analog filters are of the *minimum phase* type, having just the
> lowest possible delay for that shape of output response. This is possible to
> realize digitally with IIR filters only.
Minimum-phase (or nearly minimum) FIRs are possible, just not symmetric
FIRs. You can make maximum-phase FIRs too. Then *all* the ringing is on
the leading edge.
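That min/max-phase relationship has a neat sketch, assuming NumPy (the zero locations are arbitrary illustrative choices): reversing a real FIR's coefficients reflects every zero to its conjugate reciprocal, so a minimum-phase filter (all zeros inside the unit circle) becomes maximum-phase, with an identical magnitude response but with its impulse-response energy shifted from the leading taps to the trailing ones.

```python
import numpy as np

# Build a minimum-phase FIR by placing all zeros inside the unit
# circle (these particular zeros are just an illustrative choice).
zeros = [0.9 * np.exp(1j * 0.4 * np.pi), 0.9 * np.exp(-1j * 0.4 * np.pi),
         0.8 * np.exp(1j * 0.7 * np.pi), 0.8 * np.exp(-1j * 0.7 * np.pi)]
h_min = np.real(np.poly(zeros))   # tap values from the zero locations

# Reversing the taps reflects each zero to 1/conj(z), i.e. outside
# the unit circle: a maximum-phase FIR.
h_max = h_min[::-1]

# Same magnitude response...
assert np.allclose(np.abs(np.fft.fft(h_min, 64)),
                   np.abs(np.fft.fft(h_max, 64)))

# ...but the energy is front-loaded for minimum phase and
# back-loaded for maximum phase, so the max-phase version puts
# all its ringing ahead of the main output.
e_min = np.cumsum(h_min**2) / np.sum(h_min**2)
e_max = np.cumsum(h_max**2) / np.sum(h_max**2)
print(e_min[1], e_max[1])   # min-phase accumulates energy sooner
```

A symmetric (linear-phase) FIR is its own reversal, which is exactly why it splits the ringing evenly before and after the main output.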
(Actually, Gauss never published it. It was only published
posthumously as part of his notes.)
> I've seen a lot of posts over the last year or so that indicate a lack
> of understanding of the implications of the Nyquist theory, and just
> where the Nyquist rate fits into the design of sampled systems.
> So I decided to write a short little article to make it all clear.
I like it.
As for section 1, for a periodic signal, or one that you only care
about over a finite time, you can (mathematically) sample perfectly in a
finite time. Realistically, quantum mechanics and the uncertainty
principle, in other words noise, will get to you.
The question of < or <= comes up often. The boundary case has zero
probability (that is, zero width), so it will never come up in real signals.
(Or consider jitter in the time base.)
Other than that, I think it is pretty good.
-- glen
> glen herrmannsfeldt wrote:
Then, Gauss wrote the first published paper on FFT?
If you want to put it that way, very few people publish
papers, they just send them to someone else to publish.
But yes, I had forgotten that.
-- glen
>
>>> By the way, Gauss published the first paper on the FFT.
>
>> (Actually, Gauss never published it. It was only published
>> posthumously as part of his notes.)
>
> Then, Gauss wrote the first published paper on FFT?
>
> If you want to put it that way, very few people publish
> papers, they just send them to someone else to publish.
>
We have to remember what means there were back in their days. Far fewer
journals with available space. No word processors. Very costly
type-setting process. Etc.
Even nowadays publishing isn't easy. I have done a few and the whole
process is quite laborious. However, we now have an excellent means of
publishing just about anything (legal) we want: The web. Everybody can
set up a web site and go ahead. Also, you can publish your ideas in
newsgroups just like this one. All that provides instant publication.
Gauss, Nyquist and others didn't have all this and I assume Shannon was
too far into retirement by then as well. AFAIR he passed away at old age
around five years ago.
--
Regards, Joerg
No. Gauss' work was not published until 1866, as a part of his
collected works. Prior to that, there were various authors who
published related algorithms (e.g. a paper by Everett in 1860, one
published by Archibald Smith in 1846, and one published by F. Carlini
in 1828, although these works only described restricted cases).
What does seem to be true is that Gauss was the first *recorded*
discoverer of an FFT. He was also (apparently) the only author until
Cooley & Tukey in 1965 to describe a general mixed-radix algorithm for
any composite size.
(See the excellent paper, "Gauss and the History of the Fast Fourier
Transform," by Heideman et al., IEEE ASSP Magazine, p. 14, October
1984.)
> If you want to put it that way, very few people publish
> papers, they just send them to someone else to publish.
You're being a bit too pedantic for my taste; by "publish" in science,
we usually mean "initiate the publication process".
Regards,
Steven G. Johnson