
The Analog/Digital Distinction


Stevan Harnad

Oct 27, 1986, 11:20:07 PM

Steven R. Jacobs (utah-cs!jacobs) of the University of Utah CS Dept
has given me permission to post his contribution to defining the A/D
distinction. It appears below, followed at the very end by some comments
from me.
[Will someone with access please post a copy to sci.electronics?]

>> One prima facie non-starter: "continuous" vs. "discrete" physical processes.

>I apologize if this was meant to avoid discussion of continuous/discrete
>issues relating to analog/digital representations. I find it difficult
>to avoid talking in terms of "continuous" and "discrete" processes when
>discussing the difference between analog and digital signals. I am
>approaching the question from a signal processing point of view, so I
>tend to assume that "real" signals are analog signals, and other methods
>of representing signals are used as approximations of analog signals (but
>see below about a physicist's perspective). Yes, I realize you asked for
>objective definitions. For my own non-objective convenience, I will use
>analog signals as a starting point for obtaining other types of signals.
>This will assist in discussing the operations used to derive non-analog
>signals from analog signals, and in discussing the effects of the operations
>on the mathematics involved when manipulating the various types of signals
>in the time and frequency domains.
>
>The distinction of continuous/discrete can be applied to both the amplitude
>and time axes of a signal, which allows four types of signals to be defined.
>So, some "loose" definitions:
>
>Analog signal -- one that is continuous both in time and amplitude, so that
> the amplitude of the signal may change to any amplitude at any time.
> This is what many electrical engineers might describe as a "signal".
>
>Sampled signal -- continuous in amplitude, discrete in time (usually with
> equally-spaced sampling intervals). Signal may take on any amplitude,
> but the amplitude changes only at discrete times. Sampled signals
> are obtained (obviously?) by sampling analog signals. If sampling is
> done improperly, aliasing will occur, causing a loss of information.
> Some (most?) analog signals cannot be accurately represented by a
> sampled signal, since only band-limited signals can be sampled without
> aliasing. Sampled signals are the basis of Digital Signal Processing,
> although digital signals are invariably used as an approximation of
> the sampled signals.
>
>Quantized signal -- piece-wise continuous in time, discrete in amplitude.
> Amplitude may change at any time, but only to discrete levels. All
> changes in amplitude are steps.
>
>Digital signal -- one that is discrete both in time and amplitude, and may
> change in (discrete) amplitude only at certain (discrete, usually
> uniformly spaced) time intervals. This is obtained by quantizing
> a sampled signal.
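
[An illustrative aside: a minimal Python/NumPy sketch of the four signal types
just defined. The test waveform, sampling rate, and quantization step are
arbitrary choices.]

    import numpy as np

    # "Analog" stand-in: a densely evaluated continuous-time waveform.
    t_fine = np.linspace(0.0, 1.0, 10000)       # quasi-continuous time axis
    analog = np.sin(2 * np.pi * 5 * t_fine)     # continuous in time and amplitude

    # Sampled signal: discrete in time, continuous in amplitude.
    fs = 100.0                                  # sampling frequency (Hz)
    t_samp = np.arange(0.0, 1.0, 1.0 / fs)
    sampled = np.sin(2 * np.pi * 5 * t_samp)

    # Quantized signal: continuous in time, discrete in amplitude.
    step = 0.25                                 # quantization step
    quantized = step * np.round(analog / step)

    # Digital signal: discrete in both time and amplitude
    # (obtained by quantizing the sampled signal).
    digital = step * np.round(sampled / step)
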
>
>Other types of signals can be made by combining these "basic" types, but
>that topic is more appropriate for net.bizarre than for sci.electronics.
>
>The real distinction (in my mind) between these representations is the effect
>the representation has on the mathematics required to manipulate the signals.
>
>Although most engineers and computer scientists would think of analog signals
>as the most "correct" representations of signals, a physicist might argue that
>the "quantum signal" is the only signal which corresponds to the real world,
>and that analog signals are merely a convenient approximation used by
>mathematicians.
>
>One major distinction (from a mathematical point of view) between sampled
>signals and analog signals can be best visualized in the frequency domain.
>A band-limited analog signal has a Fourier transform with finite support. A
>sampled representation of the same signal will be periodic in the Fourier
>domain. Increasing the sampling frequency will "spread out" the identical
>"clumps" in the FT (fourier transform) of a sampled signal, but the FT
>of the sampled signal will ALWAYS remain periodic, so that in the limit as
>the sampling frequency approaches infinity, the sampled signal DOES NOT
>become a "better" approximation of the analog signal; the two remain entirely
>distinct. Whenever the sampling frequency exceeds the Nyquist rate (twice the
>highest frequency in the signal), the original analog signal can be exactly
>recovered from the sampled signal, so that the two representations contain
>equivalent information, but the
>two signals are not the same, and the sampled signal does not "approach"
>the analog signal as the sampling frequency is increased. For signals which
>are not band-limited, sampling causes a loss of information due to aliasing.
>As the sampling frequency is increased, less information is lost, so that the
>"goodness" of the approximation improves as the sampling frequency increases.
>Still, the sampled signal is fundamentally different from the analog signal.
>This fundamental difference applies also to digital signals, which are both
>quantized and sampled.
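
[An illustrative aside on the aliasing point: a small Python/NumPy sketch with
arbitrarily chosen frequencies. A 7 Hz cosine sampled at 10 Hz produces exactly
the same samples as a 3 Hz cosine, so once sampled the two are
indistinguishable.]

    import numpy as np

    fs = 10.0                               # sampling frequency (Hz)
    t = np.arange(20) / fs                  # sample instants

    x_hi = np.cos(2 * np.pi * 7.0 * t)      # 7 Hz -- above fs/2, so it aliases
    x_lo = np.cos(2 * np.pi * 3.0 * t)      # 3 Hz -- its alias at fs = 10 Hz

    # The two sample sequences agree to machine precision: the sampled
    # representation cannot tell the 7 Hz signal from the 3 Hz one.
    print(np.allclose(x_hi, x_lo))          # True
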
>
>Digital signals are usually used as an approximation to "sampled" signals.
>The mathematics used for digital signal processing is actually only correct
>when applied to sampled signals (maybe it should be called "Sampled Signal
>Processing" (SSP) instead). The approximation is usually handled by
>ignoring the "quantization noise" which is introduced when converting a
>sampled analog signal into a digital signal. This is convenient because it
>avoids some messy "details" in the mathematics. To properly deal with
>quantized signals requires giving up some "nice" properties of signals and
>operators that are applied to signals. Mostly, operators which are applied
>to signals become non-commutative when the signals are discrete in amplitude.
>This is very much related to the "Heisenberg uncertainty principle" of
>quantum mechanics, and to me represents another "true" distinction between
>analog and digital signals. The quantization of signals represents a loss of
>information that is qualitatively different from any loss of information that
>occurs from sampling. This difference is usually glossed over or ignored in
>discussions of signal processing.
>
>Well, those are some half-baked ideas that come to my mind. They are probably
>not what you are looking for, so feel free to post them to /dev/null.
>
>Steve Jacobs
>
- - - - - - - - - - - - - - - - - - - - - - - -

REPLY:

> I apologize if this was meant to avoid discussion of continuous/discrete
> issues relating to analog/digital representations.

It wasn't meant to avoid discussion of continuous/discrete at all;
just to avoid a simple-minded equation of C/D with A/D, overlooking
all the attendant problems of that move. You certainly haven't done that
in your thoughtful and articulate review and analysis.

> I tend to assume that "real" signals are analog signals, and other
> methods of representing signals are used as approximations of analog
> signals.

That seems like the correct assumption. But if we shift for a moment
from considering the A or D signals themselves and consider instead
the transformation that generated them, the question arises: If "real"
signals are analog signals, then what are they analogs of? Let's
borrow some formal jargon and say that there are (real) "objects,"
and then there are "images" of them under various types of
transformations. One such transformation is an analog transformation.
In that case the image of the object under the (analog) transformation
can also be called an "analog" of the object. Is that an analog signal?

The approximation criterion also seems right on the mark. Using the
object/transformation/image terminology again, another kind of a
transformation is a "digital" transformation. The image of an object
(or of the analog image of an object) under a digital transformation
is "approximate" rather than "exact." What is the difference between
"approximate" and "exact"? Here I would like to interject a tentative
candidate criterion of my own: I think it may have something to do with
invertibility. A transformation from object to image is analog if (or
to the degree that) it is invertible. In a digital approximation, some
information or structure is irretrievably lost (the transformation
is not 1:1).
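
[An illustrative aside on the invertibility criterion: a rough Python sketch
with made-up transformations. A 1:1 rescaling can be undone exactly, while a
quantizer maps many distinct inputs onto one image, so the original cannot be
recovered.]

    import math

    def analog_map(x):
        """A 1:1 ("analog-like") transformation: invertible in principle."""
        return 2.0 * x + 1.0

    def analog_unmap(y):
        return (y - 1.0) / 2.0

    def digital_map(x, step=0.5):
        """A many-to-one ("digital-like") transformation: not invertible."""
        return step * round(x / step)

    x = 0.26
    print(math.isclose(analog_unmap(analog_map(x)), x))   # True: x is recoverable
    print(digital_map(0.26), digital_map(0.49))           # 0.5 0.5 -- both collapse
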

So, might invertibility/noninvertibility have something to do with the
distinction between an A and a D transformation? And do "images" of
these two kinds count as "representations" in the sense in which that
concept is used in AI, cognitive psychology and philosophy (not
necessarily univocally)? And, finally, where do "symbolic"
representations come in? If we take a continuous object and make a
discrete, approximate image of it, how do we get from that to a
symbolic representation?


> Analog signal -- one that is continuous both in time and amplitude.

> Sampled signal -- continuous in amplitude, discrete in time...
> If sampling is done improperly, aliasing will occur, causing a
> loss of information.

> Quantized signal -- piece-wise continuous in time, discrete in
> amplitude.

> Digital signal -- one that is discrete both in time and amplitude...
> This is obtained by quantizing a sampled signal.

Both directions of departure from the analog, it seems, lose
information, unless the interpolations of the gaps in either time or
amplitude can be accurately made somehow. Question: What if the
original "object" is discrete in the first place, both in space and
time? Does that make a digital transformation of it "analog"? I
realize that this is violating the "signal" terminology, but, after all,
signals have their origins too. Preservation and invertibility of
information or structure seem to be even more general features than
continuity/discreteness. Or perhaps we should be focusing on the
continuity/noncontinuity of the transformations rather than the
objects?

> a physicist might argue that the "quantum signal" is the only
> signal which corresponds to the real world, and that analog
> signals are merely a convenient approximation used by mathematicians.

This, of course, turns the continuous/discrete and the exact/approximate
criteria completely on their heads, as I think you recognize too. And
it's one of the things that makes continuity a less straightforward basis
for the A/D distinction.

> Mostly, operators which are applied to signals become
> non-commutative when the signals are discrete in amplitude.
> This is very much related to the "Heisenberg uncertainty principle"
> of quantum mechanics, and to me represents another "true" distinction
> between analog and digital signals. The quantization of signals
> represents a loss of information that is qualitatively different from
> any loss of information that occurs from sampling.

I'm not qualified to judge whether this is an analogy or a true quantum
effect. If the latter, then of course the qualitative difference
resides in the fact that (on current theory) the information is
irretrievable in principle rather than merely in practice.
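
[An illustrative aside on the operator point: a small Python/NumPy sketch with
arbitrary values. Amplitude quantization fails to commute even with a simple
gain operator, whatever one makes of the analogy to quantum mechanics.]

    import numpy as np

    def quantize(x, step=0.25):
        """Round each amplitude to the nearest multiple of `step`."""
        return step * np.round(np.asarray(x) / step)

    x = np.array([0.10, 0.30, 0.55, 0.80])
    gain = 0.5

    print(quantize(gain * x))     # scale, then quantize: 0., 0.25, 0.25, 0.5
    print(gain * quantize(x))     # quantize, then scale: 0., 0.125, 0.25, 0.375
    # The two orders give different results, so the operators do not commute.
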

> Well, those are some half-baked ideas that come to my mind.

Many thanks for your thoughtful contribution. I hope the discussion
will continue "baking."


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mi...@princeton.csnet
(609)-921-7771

Anders Weinstein

Oct 29, 1986, 11:37:55 AM

> From Stevan Harnad:

>
>> Analog signal -- one that is continuous both in time and amplitude.
>> ...

>> Digital signal -- one that is discrete both in time and amplitude...
>> This is obtained by quantizing a sampled signal.
>
> Question: What if the
>original "object" is discrete in the first place, both in space and
>time? Does that make a digital transformation of it "analog"? I

Engineers are of course free to use the words "analog" and "digital" in their
own way. However, I think that from a philosophical standpoint, no signal
should be regarded as INTRINSICALLY analog or digital; the distinction
depends crucially on how the signal in question functions in a
representational system. If a continuous signal is used to encode digital
data, the system ought to be regarded as digital.

I believe this is the case in MOST real digital systems, where quantum
mechanics is not relevant and the physical signals in question are best
understood as continuous ones. The actual signals are only approximated by
discontinuous mathematical functions (e.g. a square wave).

> The image of an object
>(or of the analog image of an object) under a digital transformation
>is "approximate" rather than "exact." What is the difference between
>"approximate" and "exact"? Here I would like to interject a tentative
>candidate criterion of my own: I think it may have something to do with
>invertibility. A transformation from object to image is analog if (or
>to the degree that) it is invertible. In a digital approximation, some
>information or structure is irretrievably lost (the transformation
>is not 1:1).

> ...

It's a mistake to assume that transformation from "continuous" to "discrete"
representations necessarily involves a loss of information. Lots of
continuous functions can be represented EXACTLY in digital form, by, for
example, encoded polynomials, differential equations, etc.
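
[An illustrative aside on exact digital encodings: a rough Python sketch using
an arbitrary polynomial. A finite tuple of coefficients is a discrete object,
yet it determines a continuous function exactly, and operations such as
evaluation at rational points and differentiation can be carried out on the
encoding with no approximation at all.]

    from fractions import Fraction as F

    # p(x) = 3x^2 - x + 2, encoded exactly by its coefficients
    # (lowest degree first).
    p = (F(2), F(-1), F(3))

    def evaluate(coeffs, x):
        """Exact evaluation at a rational x (Horner's rule)."""
        acc = F(0)
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    def differentiate(coeffs):
        """Exact derivative, again as a coefficient tuple."""
        return tuple(k * c for k, c in enumerate(coeffs) if k > 0)

    print(evaluate(p, F(1, 3)))    # prints 2: 3*(1/9) - 1/3 + 2 = 2, exactly
    print(differentiate(p))        # (Fraction(-1, 1), Fraction(6, 1)), i.e. 6x - 1
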

Anders Weinstein

Stevan Harnad

Oct 29, 1986, 3:28:06 PM

[Will someone with access post this on sci.electronics too, please?]

Anders Weinstein <princeton!cmcl2!harvard!DIAMOND.BBN.COM!aweinste>
has offered some interesting excerpts from the philosopher Nelson Goodman's
work on the A/D distinction. I suspect that some people will find Goodman's
considerations a little "dense," not to say hirsute, particularly
those hailing from, say, sci.electronics; I do too. One of the
subthemes here is whether or not engineers, cognitive psychologists
and philosophers are talking about the same thing when
they talk about A/D.

[Other relevant sources on A/D are Zenon Pylyshyn's book
"Computation and Cognition," John Haugeland's "Artificial
Intelligence" and David Lewis's 1971 article in Nous 5: 321-327,
entitled "Analog and Digital."]

First, some responses to Weinstein/Goodman on A/D; then some responses
to Weinstein-on-Harnad-on-Jacobs:

> systems like musical notation which are used to DEFINE a work of
> art by dividing the instances from the non-instances

I'd be reluctant to try to base a rigorous A/D distinction on the
ability to make THAT anterior distinction!

> "finitely differentiated," or "articulate." For every two characters
> K and K' and every mark m that does not belong to both, [the]
> determination that m does not belong to K or that m does not belong
> to K' is theoretically possible. ...

I'm skeptical that the A/D problem is perspicuously viewed as one of
notation, with, roughly, (1) the "digital notation" being all-or-none and
discrete and the "analog notation" failing to be, and with (2) corresponding
capacity or incapacity to discriminate among the objects they stand for.

> A scheme is syntactically dense if it provides for infinitely many
> characters so ordered that between each two there is a third.

I'm no mathematician, but it seems to me that this is not strong
enough for the continuity of the real number line. The rational
numbers are "syntactically dense" according to this definition. But
maybe you don't want real continuity...?

> semantic finite differentiation... for every two characters
> K and K' such that their compliance classes are not identical and [for]
> every object h that does not comply with both, [the] determination
> that h does not comply with K or that h does not comply with K' must
> be theoretically possible.

I hesitantly infer that the "semantics" concerns the relation between
the notational "image" (be it analog or digital) and the object it
stands for. (Could a distinction that so many people feel they have a
good intuitive handle on really require so much technical machinery to
set up? And are the different candidate technical formulations really
equivalent, and capturing the same intuitions and practices?)

> A symbol _scheme_ is analog if syntactically dense; a _system_ is
> analog if syntactically and semantically dense. ... A digital scheme,
> in contrast, is discontinuous throughout; and in a digital system the
> characters of such a scheme are one-one correlated with
> compliance-classes of a similarly discontinuous set. But discontinuity,
> though implied by, does not imply differentiation...To be digital, a
> system must be not merely discontinuous but _differentiated_
> throughout, syntactically and semantically...

Does anyone who understands this know whether it conforms to, say,
analog/sampled/quantized/digital distinctions offered by Steven Jacobs
in a prior iteration? Or the countability criterion suggested by Mitch
Sundt?

> If only thoroughly dense systems are analog, and only thoroughly
> differentiated ones are digital, many systems are of neither type.

How many? And which ones? And where does that leave us with our
distinction?

Weinstein's summary:

>>To summarize: when a dense language is used to represent a dense domain, the
>>system is analog; when a discrete (Goodman's "discontinuous") and articulate
>>language maps a discrete and articulate domain, the system is digital.

What about when a discrete language is used to represent a dense
domain (the more common case, I believe)? Or the problem case of a
dense representation of a discrete domain? And what if there are no dense
domains (in physical nature)? What if even the dense/dense criterion
can never be met? Is this all just APPROXIMATELY true? Then how does
that square with, say, Steve Jacobs again, on approximation?

--------

What follows is a response to Weinstein-on-Harnad-on-Jacobs:

> Engineers are of course free to use the words "analog" and "digital"
> in their own way. However, I think that from a philosophical
> standpoint, no signal should be regarded as INTRINSICALLY analog
> or digital; the distinction depends crucially on how the signal in
> question functions in a representational system. If a continuous signal
> is used to encode digital data, the system ought to be regarded as
> digital.

Agreed that an isolated signal's A or D status cannot be assigned, and
that it depends on its relation with other signals in the
"representational system" (whatever that is) and their relations to their
sources. It also depends, I should think, on what PROPERTIES of the signal
are carrying the information, and what properties of the source are
being preserved in the signal. If the signal is continuous, but its
continuity is not doing any work (has no signal value, so to speak),
then it is irrelevant. In practice this should not be a problem, since
continuity depends on a signal's relation to the rest of the signal
set. (If the only amplitudes transmitted are either very high or very
low, with nothing in between, then the continuity in between is beside
the point.) Similarly with the source: It may be continuous, but the
continuity may not be preserved, even by a continuous signal (the
continuities may not correlate in the right way). On the other hand, I
would want to leave open the question of whether or not discrete
sources can have analogs.

> I believe this is the case in MOST real digital systems, where
> quantum mechanics is not relevant and the physical signals in
> question are best understood as continuous ones. The actual signals
> are only approximated by discontinuous mathematical functions (e.g.
> a square wave).

There seems to be a lot of ambiguity in the A/D discussion as to just
what is an approximation of what. On one view, a digital
representation is a discrete approximation to a continuous object (source)
or to a (continuous) analog representation of a (continuous) object
(source). But if all objects/sources are really discontinuous, then
it's really the continuous analog representation that's approximate!
Perhaps it's all a matter of scale, but then that would make the A/D
distinction very relative and scale-dependent.


> It's a mistake to assume that transformation from "continuous" to
> "discrete" representations necessarily involves a loss of information.
> Lots of continuous functions can be represented EXACTLY in digital
> form, by, for example, encoded polynomials, differential equations, etc.

The relation between physical implementations and (formal!) mathematical
idealizations also looms large in this discussion. I do not, for
example, understand how you can represent continuous functions digitally AND
exactly. I always thought it had to be done by finite difference
equations, hence only approximately. Nor can a digital computer do
real integration, only finite summation. Now the physical question is,
can even an ANALOG computer be said to be doing true integration if
physical processes are really discrete, or is it only doing an approximation
too? The only way I can imagine transforming continuous sources into
discrete signals is if the original continuity was never true
mathematical continuity in the first place. (After all, the
mathematical notion of an unextended "point," which underlies the
concept of formal continuity, is surely an idealization, as are many
of the infinitesimal and limiting notions of analysis.) The A/D
distinction seems to be dissolving in the face of all of these
awkward details...
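
[An illustrative aside on finite summation versus true integration: a small
Python sketch with an arbitrary integrand. The trapezoid sums approach the true
value of the integral of sin over [0, pi], which is exactly 2, but no finite
sum ever reaches it.]

    import math

    def trapezoid(f, a, b, n):
        """Finite summation: n-panel trapezoid approximation to the integral."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for k in range(1, n):
            total += f(a + k * h)
        return h * total

    # The error shrinks roughly as 1/n^2 but never vanishes for finite n.
    for n in (4, 16, 64, 256):
        approx = trapezoid(math.sin, 0.0, math.pi, n)
        print(n, approx, 2.0 - approx)
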

Anders Weinstein

Oct 30, 1986, 9:45:56 PM

In article <2...@mind.UUCP> har...@mind.UUCP (Stevan Harnad) writes:
> I suspect that some people will find Goodman's
>considerations a little "dense," not to say hirsute, ...

Well, you asked for a "precise" definition! Although Goodman's rigor may seem
daunting, there are really only two main concepts to grasp: "density", which
is familiar to many from mathematics, and "differentiation".

>> A scheme is syntactically dense if it provides for infinitely many
>> characters so ordered that between each two there is a third.
>
>I'm no mathematician, but it seems to me that this is not strong
>enough for the continuity of the real number line. The rational
>numbers are "syntactically dense" according to this definition. But
>maybe you don't want real continuity...?

Quite right. Goodman mentions that the difference between continuity and
density is immaterial for his purposes, since density is always sufficient to
destroy differentiation (and hence "notationality" and "digitality" as
well).

"Differentiation" pertains to our ability to make the necessary distinctions
between elements. There are two sides to the requirement: "syntactic
differentiation" requires that tokens belonging to distinct characters be at
least theoretically discriminable; "semantic differentiation" requires that
objects denoted by non-coextensive characters be theoretically discriminable
as well.

Objects fail to be even theoretically discriminable if they can be
arbitrarily similar and still count as different. For example, consider a
language consisting of straight marks such that marks differing in length by
even the smallest fraction of an inch are stipulated to belong to different
characters. This language is not finitely differentiated in Goodman's sense.
If, however, we decree that all marks between 1 and 2 inches long belong to
one character, all marks between 3 and 4 inches long belong to another, all
marks between 5 and 6 inches long belong to another, and so on, then the
language WILL qualify as differentiated.
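
[An illustrative aside on Goodman's differentiation requirement: a rough Python
sketch of the two mark-length schemes just described, with the ranges treated
as illustrative. In the gapped scheme a finite-precision measurement can settle
which character a mark belongs to; in the dense scheme arbitrarily similar
marks may still belong to different characters, so no finite measurement can
ever certify a match.]

    def gapped_character(length_inches):
        """Finitely differentiated scheme: characters occupy separated ranges
        (1-2", 3-4", 5-6"), with gaps in between."""
        for index, lo in enumerate((1.0, 3.0, 5.0), start=1):
            if lo <= length_inches <= lo + 1.0:
                return index
        return None    # falls in a gap: the mark belongs to no character

    def dense_character(length_inches):
        """'Dense' scheme: every distinct length is its own character."""
        return length_inches

    print(gapped_character(1.7), gapped_character(3.2), gapped_character(2.5))
    # 1 2 None
    print(dense_character(1.7) == dense_character(1.7000001))
    # False -- marks differing by any amount, however small, differ in character
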

The upshot of Goodman's requirement is that if a symbol system is to count as
"digital" (or as "notational"), there must be some finite sized "gaps",
however minute, between the distinct elements that need to be distinguished.

Some examples:

A score in musical notation can, if certain conventions are adopted, be
regarded as a digital representation, with the score denoting any performance
that complies with it. Note that although musical pitches, say, may take on
a continuous range of values, once we adopt some conventions about how much
variation in pitch is to be tolerated among the compliants of each note, the
set of note extensions can become finitely differentiated.

A scale drawing of a building, on the other hand, usually functions as an
analog representation: any difference in a line's length, however fine, is
regarded as denoting a corresponding difference in the building's size. If we
decide to interpret the drawing in some "quantized" way, however, then it can
be a digital representation.

To quote Goodman:

Consider an ordinary watch without a second hand. The hour-hand is
normally used to pick out one of twelve divisions of the half-day.
It speaks notationally [and digitally -- AW]. So does the minute hand
if used only to pick out one of sixty divisions of the hour; but if
the absolute distance of the minute hand beyond the preceding mark is
taken as indicating the absolute time elapsed since that mark was passed,
the symbol system is non-notational. Of course, if we set some limit --
whether of a half minute or one second or less -- upon the fineness of
judgment so to be made, the scheme here too may become notational.

I'm still thinking about your question of how Goodman's distinction relates
to the intuitive notion as employed by engineers or cognitivists and will
reply later.

Anders Weinstein <awei...@DIAMOND.BBN.COM>
