[sci.electronics] The Analog/Digital Distinction

harnad@mind.UUCP (Stevan Harnad) (10/28/86)

Steven R. Jacobs (utah-cs!jacobs) of the University of Utah CS Dept
has given me permission to post his contribution to defining the A/D
distinction. It appears below, followed at the very end by some comments
from me.
[Will someone with access please post a copy to sci.electronics?]

>> One prima facie non-starter: "continuous" vs. "discrete" physical processes.

>I apologize if this was meant to avoid discussion of continuous/discrete
>issues relating to analog/digital representations.  I find it difficult
>to avoid talking in terms of "continuous" and "discrete" processes when
>discussing the difference between analog and digital signals.  I am
>approaching the question from a signal processing point of view, so I
>tend to assume that "real" signals are analog signals, and other methods
>of representing signals are used as approximations of analog signals (but
>see below about a physicist's perspective).  Yes, I realize you asked for
>objective definitions.  For my own non-objective convenience, I will use
>analog signals as a starting point for obtaining other types of signals.
>This will assist in discussing the operations used to derive non-analog
>signals from analog signals, and in discussing the effects of the operations
>on the mathematics involved when manipulating the various types of signals
>in the time and frequency domains.
>
>The distinction of continuous/discrete can be applied to both the amplitude
>and time axes of a signal, which allows four types of signals to be defined.
>So, some "loose" definitions:
>
>Analog signal -- one that is continuous both in time and amplitude, so that
>	the amplitude of the signal may change to any amplitude at any time.
>	This is what many electrical engineers might describe as a "signal".
>
>Sampled signal -- continuous in amplitude, discrete in time (usually with
>	equally-spaced sampling intervals).  Signal may take on any amplitude,
>	but the amplitude changes only at discrete times.  Sampled signals
>	are obtained (obviously?) by sampling analog signals.  If sampling is
>	done improperly, aliasing will occur, causing a loss of information.
>	Some (most?) analog signals cannot be accurately represented by a
>	sampled signal, since only band-limited signals can be sampled without
>	aliasing.  Sampled signals are the basis of Digital Signal Processing,
>	although digital signals are invariably used as an approximation of
>	the sampled signals.
>
>Quantized signal -- piece-wise continuous in time, discrete in amplitude.
>	Amplitude may change at any time, but only to discrete levels.  All
>	changes in amplitude are steps.
>
>Digital signal -- one that is discrete both in time and amplitude, and may
>	change in (discrete) amplitude only at certain (discrete, usually
>	uniformly spaced) time intervals.  This is obtained by quantizing
>	a sampled signal.
>
>Other types of signals can be made by combining these "basic" types, but
>that topic is more appropriate for net.bizarre than for sci.electronics.
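
The four "basic" types above can be made concrete with a small NumPy sketch;
the sine wave, sampling rate, and number of quantization levels below are
arbitrary illustrative choices, not anything taken from the post.

    # Derive the four signal types from a single 5 Hz sine wave.
    import numpy as np

    f_sig  = 5.0      # signal frequency, Hz
    fs     = 50.0     # sampling rate, Hz (well above the 10 Hz Nyquist rate)
    levels = 16       # number of quantization levels
    step   = 2.0 / levels

    # "Analog" signal: continuous in time and amplitude (here only
    # approximated numerically, on a very fine time grid).
    t_fine = np.arange(0.0, 1.0, 1e-4)
    analog = np.sin(2 * np.pi * f_sig * t_fine)

    # Sampled signal: continuous in amplitude, discrete in time.
    t_samp  = np.arange(0.0, 1.0, 1.0 / fs)
    sampled = np.sin(2 * np.pi * f_sig * t_samp)

    # Quantized signal: continuous in time, discrete in amplitude.
    quantized = np.round(analog / step) * step

    # Digital signal: discrete in both time and amplitude
    # (a quantized version of the sampled signal).
    digital = np.round(sampled / step) * step

    print(sampled[:5])
    print(digital[:5])
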
>
>The real distinction (in my mind) between these representations is the effect
>the representation has on the mathematics required to manipulate the signals.
>
>Although most engineers and computer scientists would think of analog signals
>as the most "correct" representations of signals, a physicist might argue that
>the "quantum signal" is the only signal which corresponds to the real world,
>and that analog signals are merely a convenient approximation used by
>mathematicians.
>
>One major distinction (from a mathematical point of view) between sampled
>signals and analog signals can be best visualized in the frequency domain.
>A band-limited analog signal has a Fourier transform that is non-zero only
>over a finite band of frequencies.  A
>sampled representation of the same signal will be periodic in the Fourier
>domain.  Increasing the sampling frequency will "spread out" the identical
>"clumps" in the FT (fourier transform) of a sampled signal, but the FT
>of the sampled signal will ALWAYS remain periodic, so that in the limit as
>the sampling frequency approaches infinity, the sampled signal DOES NOT
>become a "better" approximation of the analog signal; the two remain entirely
>distinct.  Whenever the sampling frequency exceeds the Nyquist frequency,
>the original analog signal can be exactly recovered from the sampled signal,
>so that the two representations contain equivalent information, but the
>two signals are not the same, and the sampled signal does not "approach"
>the analog signal as the sampling frequency is increased.  For signals which
>are not band-limited, sampling causes a loss of information due to aliasing.
>As the sampling frequency is increased, less information is lost, so that the
>"goodness" of the approximation improves as the sampling frequency increases.
>Still, the sampled signal is fundamentally different from the analog signal.
>This fundamental difference applies also to digital signals, which are both
>quantized and sampled.
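
One elementary consequence of that periodicity can be checked numerically:
two analog sinusoids whose frequencies differ by a multiple of the sampling
frequency produce identical samples, so any component above the Nyquist
frequency folds back ("aliases") onto a lower one.  A minimal NumPy sketch,
with frequencies chosen purely for illustration:

    import numpy as np

    fs = 10.0                      # sampling rate, Hz; Nyquist frequency is 5 Hz
    n  = np.arange(40)             # 4 seconds of sample indices

    x3  = np.sin(2 * np.pi *  3.0 * n / fs)   # 3 Hz sine, below Nyquist
    x13 = np.sin(2 * np.pi * 13.0 * n / fs)   # 13 Hz sine, above Nyquist

    # The two sample sequences are indistinguishable: 13 Hz aliases to 3 Hz.
    print(np.allclose(x3, x13))    # True
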
>
>Digital signals are usually used as an approximation to "sampled" signals.
>The mathematics used for digital signal processing is actually only correct
>when applied to sampled signals (maybe it should be called "Sampled Signal
>Processing" (SSP) instead).  The approximation is usually handled mostly by
>ignoring the "quantization noise" which is introduced when converting a
>sampled analog signal into a digital signal.  This is convenient because it
>avoids some messy "details" in the mathematics.  To properly deal with
>quantized signals requires giving up some "nice" properties of signals and
>operators that are applied to signals.  Mostly, operators which are applied
>to signals become non-commutative when the signals are discrete in amplitude.
>This is very much related to the "Heisenberg uncertainty principle" of
>quantum mechanics, and to me represents another "true" distinction between
>analog and digital signals.  The quantization of signals represents a loss of
>information that is qualitatively different from any loss of information that
>occurs from sampling.  This difference is usually glossed over or ignored in
>discussions of signal processing.
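
Whatever one makes of the quantum-mechanical analogy, the narrower claim that
quantization fails to commute with other operations on the signal is easy to
check.  A minimal NumPy sketch (the gain, step size, and test signal are
arbitrary choices, not from the post):

    import numpy as np

    def quantize(x, step=0.1):
        """Round each amplitude to the nearest multiple of `step`."""
        return np.round(x / step) * step

    rng = np.random.default_rng(0)
    x   = rng.uniform(-1.0, 1.0, 1000)      # stand-in for a sampled analog signal
    g   = 0.37                              # an arbitrary gain

    a = quantize(g * x)                     # scale first, then quantize
    b = g * quantize(x)                     # quantize first, then scale

    print(np.max(np.abs(a - b)))            # nonzero: the two orders disagree
    print(np.max(np.abs(quantize(x) - x)))  # quantization error, at most step/2

With no quantization in the chain, scaling and any other linear operation
commute; the discrepancy appears only once amplitudes are forced onto
discrete levels.
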
>
>Well, those are some half-baked ideas that come to my mind.  They are probably
>not what you are looking for, so feel free to post them to /dev/null.
>
>Steve Jacobs
>
- - - - - - - - - - - - - - - - - - - - - - - - 

REPLY:

>	I apologize if this was meant to avoid discussion of continuous/discrete
>	issues relating to analog/digital representations.

It wasn't meant to avoid discussion of continuous/discrete at all;
just to avoid a simple-minded equation of C/D with A/D, overlooking
all the attendant problems of that move. You certainly haven't done that
in your thoughtful and articulate review and analysis.

>	I tend to assume that "real" signals are analog signals, and other
>	methods of representing signals are used as approximations of analog
>	signals.

That seems like the correct assumption. But if we shift for a moment
from considering the A or D signals themselves and consider instead
the transformation that generated them, the question arises: If "real"
signals are analog signals, then what are they analogs of? Let's
borrow some formal jargon and say that there are (real) "objects,"
and then there are "images" of them under various types of
transformations. One such transformation is an analog transformation.
In that case the image of the object under the (analog) transformation
can also be called an "analog" of the object. Is that an analog signal?

The approximation criterion also seems right on the mark. Using the
object/transformation/image terminology again, another kind of a
transformation is a "digital" transformation. The image of an object
(or of the analog image of an object) under a digital transformation
is "approximate" rather than "exact." What is the difference between
"approximate" and "exact"? Here I would like to interject a tentative
candidate criterion of my own: I think it may have something to do with
invertibility. A transformation from object to image is analog if (or
to the degree that) it is invertible. In a digital approximation, some
information or structure is irretrievably lost (the transformation
is not 1:1).
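
A small sketch (my own, with arbitrary maps) of what that criterion amounts
to: a 1:1 transformation can be undone exactly, while a rounding
transformation collapses whole intervals of inputs onto single outputs, so no
inverse can recover the original.

    import numpy as np

    x = np.linspace(-1.0, 1.0, 1001)      # stand-in for the "object"

    # An invertible (1:1) image of x: recoverable exactly, "analog" by the
    # proposed criterion.
    image_a   = 3.0 * x + 0.5
    recovered = (image_a - 0.5) / 3.0
    print(np.allclose(recovered, x))      # True: no structure lost

    # A many-to-one image: rounding loses information irretrievably,
    # "digital"/approximate by the proposed criterion.
    image_d = np.round(4 * x) / 4
    print(len(np.unique(image_d)), "distinct outputs for", len(x), "inputs")
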

So, might invertibility/noninvertibility have something to do with the
distinction between an A and a D transformation? And do "images" of
these two kinds count as "representations" in the sense in which that
concept is used in AI, cognitive psychology and philosophy (not
necessarily univocally)?  And, finally, where do "symbolic"
representations come in? If we take a continuous object and make a
discrete, approximate image of it, how do we get from that to a
symbolic representation?


>	Analog signal -- one that is continuous both in time and amplitude.

>	Sampled signal -- continuous in amplitude, discrete in time...
>	If sampling is done improperly, aliasing will occur, causing a
>	loss of information.

>	Quantized signal -- piece-wise continuous in time, discrete in
>	amplitude.

>	Digital signal -- one that is discrete both in time and amplitude...
>	This is obtained by quantizing a sampled signal.

Both directions of departure from the analog, it seems, lose
information, unless the interpolations of the gaps in either time or
amplitude can be accurately made somehow. Question: What if the
original "object" is discrete in the first place, both in space and
time? Does that make a digital transformation of it "analog"? I
realize that this is violating the "signal" terminology, but, after all,
signals have their origins too. Preservation and invertibility of
information or structure seem to be even more general features than
continuity/discreteness. Or perhaps we should be focusing on the
continuity/noncontinuity of the transformations rather than the
objects?
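
A toy illustration of the "discrete object" case (my own construction, with
made-up 8-bit values): when the object is discrete to begin with, a digital
transformation of it can be fully invertible, which would put the weight on
invertibility rather than on continuity per se.

    import numpy as np

    rng = np.random.default_rng(2)
    obj = rng.integers(0, 256, size=16)   # an object that is discrete already

    image = obj.astype(np.uint8)          # a "digital" image of it, 8 bits/value
    back  = image.astype(obj.dtype)       # the transformation is exactly invertible

    print(np.array_equal(back, obj))      # True: nothing was lost
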

>	a physicist might argue that the "quantum signal" is the only
>	signal which corresponds to the real world, and that analog
>	signals are merely a convenient approximation used by mathematicians.

This, of course, turns the continuous/discrete and the exact/approximate
criteria completely on their heads, as I think you recognize too. And
it's one of the things that makes continuity a less straightforward basis
for the A/D distinction.

>	Mostly, operators which are applied to signals become
>	non-commutative when the signals are discrete in amplitude.
>	This is very much related to the "Heisenberg uncertainty principle"
>	of quantum mechanics, and to me represents another "true" distinction
>	between analog and digital signals. The quantization of signals
>	represents a loss of information that is qualitatively different from
>	any loss of information that occurs from sampling.

I'm not qualified to judge whether this is an analogy or a true quantum
effect. If the latter, then of course the qualitative difference
resides in the fact that (on current theory) the information is
irretrievable in principle rather than merely in practice.

>	Well, those are some half-baked ideas that come to my mind. 

Many thanks for your thoughtful contribution. I hope the discussion
will continue "baking."


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

aweinste@Diamond.BBN.COM (Anders Weinstein) (10/29/86)

> From Stevan Harnad:
>
>>	Analog signal -- one that is continuous both in time and amplitude.
>>     ...
>>	Digital signal -- one that is discrete both in time and amplitude...
>>	This is obtained by quantizing a sampled signal.
>
>                                          Question: What if the
>original "object" is discrete in the first place, both in space and
>time? Does that make a digital transformation of it "analog"?

Engineers are of course free to use the words "analog" and "digital" in their
own way.  However, I think that from a philosophical standpoint, no signal
should be regarded as INTRINSICALLY analog or digital; the distinction
depends crucially on how the signal in question functions in a
representational system.  If a continuous signal is used to encode digital
data, the system ought to be regarded as digital.

I believe this is the case in MOST real digital systems, where quantum
mechanics is not relevant and the physical signals in question are best
understood as continuous ones. The actual signals are only approximated by
discontinuous mathematical functions (e.g. a square wave).
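
A toy sketch of that functional view (the voltage levels, noise, and threshold
below are hypothetical, purely for illustration): the waveform is
continuous-valued everywhere, yet the system is digital because only which
side of a threshold each value falls on ever matters to it.

    import numpy as np

    rng  = np.random.default_rng(1)
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

    # "Transmit" the bits as nominal 0 V / 5 V levels plus continuous noise;
    # the actual values are continuous and never exactly 0 or 5.
    voltages = 5.0 * bits + rng.normal(0.0, 0.4, bits.size)

    # The receiver treats the signal digitally: threshold at 2.5 V.
    decoded = (voltages > 2.5).astype(int)
    print(np.array_equal(decoded, bits))  # True (at this noise level)
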

>                                              The image of an object
>(or of the analog image of an object) under a digital transformation
>is "approximate" rather than "exact." What is the difference between
>"approximate" and "exact"? Here I would like to interject a tentative
>candidate criterion of my own: I think it may have something to do with
>invertibility. A transformation from object to image is analog if (or
>to the degree that) it is invertible. In a digital approximation, some
>information or structure is irretrievably lost (the transformation
>is not 1:1).
> ...

It's a mistake to assume that transformation from "continuous" to "discrete"
representations necessarily involves a loss of information. Lots of
continuous functions can be represented EXACTLY in digital form, for example
by encoded polynomials, differential equations, etc.
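
For instance (my own example, not one from the post), a polynomial is a
continuous function that is captured exactly by a finite list of
coefficients; nothing about the function is lost in the digital encoding.

    import numpy as np

    coeffs = np.array([2, -3, 1])        # exact, finite, digital description
                                         # of p(t) = 2*t**2 - 3*t + 1

    def p(t):
        """Evaluate the continuous function the coefficients denote."""
        return np.polyval(coeffs, t)

    t = np.linspace(0.0, 1.0, 5)
    print(np.allclose(p(t), 2 * t**2 - 3 * t + 1))   # True: exact agreement
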

Anders Weinstein