[sci.electronics] A/D Distinction

harnad@mind.UUCP (Stevan Harnad) (11/01/86)

Concerning the A/D distinction, goldfain@uiucuxe.CSO.UIUC.EDU replies:

>	Analog devices/processes are best viewed as having a continuous possible
>	range of values.  (An interval of the real line, for example.)
>	Digital devices/processes are best viewed as having an underlying
>	granularity of discrete possible values.
>	(Representable by a subset of the integers.)
>	This is a pretty good definition, whether you like it or not.
>	I am curious as to what kind of discussion you are hoping to get,
>	when you rule out the correct distinction at the outset ...

Nothing is ruled out. If you follow the ongoing discussion, you'll see
what I meant by continuity and discreteness being "nonstarters." There
seem to be some basic problems with what these mean in the real
physical world. Where do you find formal continuity in physical
devices? And if it's only "approximate" continuity, then how is the
"exact/approximate" distinction that some are proposing for A/D going
to work? I'm not ruling out that these problems may be resolvable, and
that continuous/discrete will emerge as a coherent criterion after
all. I'm just suggesting that there are prima facie reasons for
thinking that the distinction has not yet been formulated coherently
by anyone. And I'm predicting that the discussion will be surprising,
even to those who thought they had a good, crisp, rigorous idea of
what the A/D distinction was.


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

harnad@mind.UUCP (Stevan Harnad) (11/01/86)

Anders Weinstein <princeton!cmcl2!harvard!DIAMOND.BBN.COM!aweinste>
has offered some interesting excerpts from the philosopher Nelson Goodman's
work on the A/D distinction. I suspect that some people will find Goodman's
considerations a little "dense," not to say hirsute, particularly
those hailing from, say, sci.electronics; I do too.  One of the
subthemes here is whether or not engineers, cognitive psychologists
and philosophers are talking about the same thing when
they talk about A/D.

[Other relevant sources on A/D are Zenon Pylyshyn's book
"Computation and Cognition," John Haugeland's "Artificial
Intelligence" and David Lewis's 1971 article in Nous 5: 321-327,
entitled "Analog and Digital."]

First, some responses to Weinstein/Goodman on A/D; then some responses
to Weinstein-on-Harnad-on-Jacobs:

>	systems like musical notation which are used to DEFINE a work of
>	art by dividing the instances from the non-instances

I'd be reluctant to try to base a rigorous A/D distinction on the
ability to make THAT anterior distinction!

>	"finitely differentiated," or "articulate." For every two characters
>	K and K' and every mark m that does not belong to both, [the]
>	determination that m does not belong to K or that m does not belong
>	to K' is theoretically possible. ...

I'm skeptical that the A/D problem is perspicuously viewed as one of
notation, with, roughly, (1) the "digital notation" being all-or-none and
discrete and the "analog notation" failing to be, and with (2) corresponding
capacity or incapacity to discriminate among the objects they stand for.

>	A scheme is syntactically dense if it provides for infinitely many
>	characters so ordered that between each two there is a third.

I'm no mathematician, but it seems to me that this is not strong
enough for the continuity of the real number line. The rational
numbers are "syntactically dense" according to this definition. But
maybe you don't want real continuity...?
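
The worry can be made concrete. A small sketch (Python, purely
illustrative, and mine rather than Goodman's) of how the rationals
satisfy the density condition while still falling short of real
continuity:

```python
from fractions import Fraction

# Between any two rationals there is a third (their midpoint),
# so the rationals are "syntactically dense" in Goodman's sense.
a, b = Fraction(1, 3), Fraction(1, 2)
m = (a + b) / 2
assert a < m < b

# Yet density is weaker than continuity.  Newton's iteration for
# sqrt(2) stays inside the rationals forever: x*x gets arbitrarily
# close to 2 but, since sqrt(2) is irrational, never reaches it.
x = Fraction(1)
for _ in range(10):
    x = (x + 2 / x) / 2
assert x * x != 2                            # exact arithmetic: a "gap"
assert abs(x * x - 2) < Fraction(1, 10**9)   # yet arbitrarily close
```

So a scheme can pass the between-any-two-a-third test and still have
"gaps" that the real line does not.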

>	semantic finite differentiation... for every two characters
>	K and K' such that their compliance classes are not identical and [for]
>	every object h that does not comply with both, [the] determination
>	that h does not comply with K or that h does not comply with K' must
>	be theoretically possible.

I hesitantly infer that the "semantics" concerns the relation between
the notational "image" (be it analog or digital) and the object it
stands for. (Could a distinction that so many people feel they have a
good intuitive handle on really require so much technical machinery to
set up? And are the different candidate technical formulations really
equivalent, and capturing the same intuitions and practices?)

>	A symbol _scheme_ is analog if syntactically dense; a _system_ is
>	analog if syntactically and semantically dense. ... A digital scheme,
>	in contrast, is discontinuous throughout; and in a digital system the
>	characters of such a scheme are one-one correlated with
>	compliance-classes of a similarly discontinuous set. But discontinuity,
>	though implied by, does not imply differentiation...To be digital, a
>	system must be not merely discontinuous but _differentiated_ 
>	throughout, syntactically and semantically...

Does anyone who understands this know whether it conforms to, say,
analog/sampled/quantized/digital distinctions offered by Steven Jacobs
in a prior iteration? Or the countability criterion suggested by Mitch
Sundt?

>	If only thoroughly dense systems are analog, and only thoroughly
>	differentiated ones are digital, many systems are of neither type.

How many? And which ones? And where does that leave us with our
distinction?

Weinstein's summary:

>>To summarize: when a dense language is used to represent a dense domain, the
>>system is analog; when a discrete (Goodman's "discontinuous") and articulate
>>language maps a discrete and articulate domain, the system is digital.

What about when a discrete language is used to represent a dense
domain (the more common case, I believe)? Or the problem case of a
dense representation of a discrete domain? And what if there are no dense
domains (in physical nature)? What if even the dense/dense criterion
can never be met? Is this all just APPROXIMATELY true? Then how does
that square with, say, Steve Jacobs again, on approximation?

--------

What follows is a response to Weinstein-on-Harnad-on-Jacobs:

>	Engineers are of course free to use the words "analog" and "digital"
>	in their own way. However, I think that from a philosophical
>	standpoint, no signal should be regarded as INTRINSICALLY analog
>	or digital; the distinction depends crucially on how the signal in
>	question functions in a representational system. If a continuous signal
>	is used to encode digital data, the system ought to be regarded as
>	digital.

Agreed that an isolated signal's A or D status cannot be assigned, and
that it depends on its relation with other signals in the
"representational system" (whatever that is) and their relations to their
sources. It also depends, I should think, on what PROPERTIES of the signal
are carrying the information, and what properties of the source are
being preserved in the signal. If the signal is continuous, but its
continuity is not doing any work (has no signal value, so to speak),
then it is irrelevant. In practice this should not be a problem, since
continuity depends on a signal's relation to the rest of the signal
set. (If the only amplitudes transmitted are either very high or very
low, with nothing in between, then the continuity in between is beside
the point.) Similarly with the source: It may be continuous, but the
continuity may not be preserved, even by a continuous signal (the
continuities may not correlate in the right way). On the other hand, I
would want to leave open the question of whether or not discrete
sources can have analogs.

>	I believe this is the case in MOST real digital systems, where
>	quantum mechanics is not relevant and the physical signals in
>	question are best understood as continuous ones. The actual signals
>	are only approximated by discontinuous mathematical functions (e.g.
>	a square wave).

There seems to be a lot of ambiguity in the A/D discussion as to just
what is an approximation of what. On one view, a digital
representation is a discrete approximation to a continuous object (source)
or to a (continuous) analog representation of a (continuous) object
(source). But if all objects/sources are really discontinuous, then
it's really the continuous analog representation that's approximate!
Perhaps it's all a matter of scale, but then that would make the A/D
distinction very relative and scale-dependent.


>	It's a mistake to assume that transformation from "continuous" to
>	"discrete" representations necessarily involves a loss of information.
>	Lots of continuous functions can be represented EXACTLY in digital
>	form, by, for example, encoded polynomials, differential equations, etc.

The relation between physical implementations and (formal!) mathematical
idealizations also looms large in this discussion. I do not, for
example, understand how you can represent continuous functions digitally AND
exactly. I always thought it had to be done by finite difference
equations, hence only approximately. Nor can a digital computer do
real integration, only finite summation. Now the physical question is,
can even an ANALOG computer be said to be doing true integration if
physical processes are really discrete, or is it only doing an approximation
too? The only way I can imagine transforming continuous sources into
discrete signals is if the original continuity was never true
mathematical continuity in the first place. (After all, the
mathematical notion of an unextended "point," which underlies the
concept of formal continuity, is surely an idealization, as are many
of the infinitesimal and limiting notions of analysis.) The A/D
distinction seems to be dissolving in the face of all of these
awkward details...
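
For what it's worth, one way to read the "encoded polynomials" claim
is this: a continuous function can be named EXACTLY by a finite,
discrete symbol string (its coefficient list), even though any finite
table of samples pins it down only approximately. A sketch (the
example is mine, not Weinstein's):

```python
from fractions import Fraction

# p(x) = x^2 - 2x + 1, represented exactly by a finite symbol
# string: its coefficient list.  Nothing continuous is stored.
coeffs = [Fraction(1), Fraction(-2), Fraction(1)]

def p(x):
    """Evaluate the polynomial exactly, by Horner's rule."""
    acc = Fraction(0)
    for c in coeffs:
        acc = acc * x + c
    return acc

# Any rational query is answered with NO sampling error:
assert p(Fraction(1, 3)) == Fraction(4, 9)
assert p(Fraction(7, 5)) == Fraction(4, 25)
```

Whether this counts as representing the continuous function itself, or
merely a finite description of it, is of course just the question at
issue.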


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

harnad@mind.UUCP (Stevan Harnad) (11/01/86)

Summary: Reply to 5 candidate formulations of the A/D distinction

------
(1)
ken@rochester.arpa writes:

>	I think the distinction is simply this: digital deals with a finite set
>	of discrete {voltage, current, whatever} levels, while analog deals
>	with a *potentially* infinite set of levels. Now I know you are going
>	to say that analog is discrete at the electron noise level but the
>	circuits are built on the assumption that the spectrum is continuous.
>	This leads to different mathematical analyses.

It sounds as if a problem of fact is being remedied by an assumption here.
Nor do potential infinities appear to remedy the problem; there are perfectly
discrete potential infinities. The A/D distinction is again looking
approximate, relative and scale-dependent, hence, in a sense, arbitrary.
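
The "finite set of levels" picture is at least easy to state
concretely. A sketch (illustrative only; the levels and the
nearest-level rule are my invention) of the sense in which a digital
reading keeps nothing about a continuous voltage except which level it
is nearest:

```python
LEVELS = [0.0, 1.0, 2.0, 3.0]  # the finite alphabet of a 2-bit channel

def quantize(v):
    """Map a continuous reading onto the nearest allowed level.

    Everything about v except its nearest level is treated as noise:
    2.1 V and 1.9 V carry the same message, namely "level 2".
    """
    return min(LEVELS, key=lambda level: abs(level - v))

assert quantize(2.1) == 2.0
assert quantize(1.9) == 2.0
assert quantize(0.4) == 0.0
```

The open question, as above, is whether the continuum being quantized
over was ever really there in the physics.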

>	Sort of like infinite memory Turing machines, we don't have them but
>	we program computers as if they had infinite memory and in practice
>	as long as we don't run out, it's ok. So as long as we don't notice
>	the noise in analog, it serves.

An approximation to an infinite rote memory represents no problem of
principle in computing theory and practice. But an approximation to an
exact distinction between the "exact" and the "approximate" doesn't seem
satisfactory. If there is an exact distinction underlying actual
engineering practice, at least, it would be useful to know what it
was, in place of intuitions that appear to break down as soon as they
are made precise.

--------
(2)
cuuxb!mwm (Marc Mengel) writes:

>	Digital is essentially a subset of analog, where the range of
>	properties used to represent information is grouped into a
>	finite set of values...
>	Analog, on the other hand, refers to using a property to directly
>	represent an infinite range of values with a different infinite
>	range of values.

This sounds right, as far as it goes. D may indeed be a subset of A.
To use the object--transformation--image vocabulary again: When an object
is transformed into an image with only finite values, then the transform
is digital. (What about combinations of image values?) When an
infinite-valued object is transformed into an infinite-valued (and
presumably covariant) image, then the transform is analog. I assume
the infinities in question have the right cardinality (i.e.,
uncountable). Questions: (i) Do discrete objects, with only finite or
countably infinite properties, not qualify to have analogs? (ii) What does
"directly represent" mean? Is there something indirect about finiteness?
(iii) What if there are really no such infinities, physically, on either
the object end or the image end?

May I interject at this point the conjecture that what seems to be
left out of all these A/D considerations so far (not just this module)
is that discretization is usually not the sole means or end of digital
representation. What about symbolic representation? What turns a
discretized, approximate image of an object into a symbolic
representation, manipulable by formal rules and semantically
interpretable as being a representation OF that object? (But perhaps
this is getting a little ahead of ourselves.)

>	This is why slide-rules are considered analog, you are USING distance
>	rather than voltage, but you can INTERPRET a distance as precisely
>	as you want. An abacus, on the other hand, also USES distance, but
>	where a disk MEANS either one thing or another, and it takes
>	lots of disks to REPRESENT a number. An abacus, then, is digital.

(No comment. Upper case added.)

--------
(3)
<bcsaic!ray> writes:

>	(An) analog is a (partial) DUPLICATE (or abstraction) 
>	of some material thing or some process, which contains 
>	(it is hoped) the significant characteristics and properties 
>	of the original.

And a digital representation can't be any of these things? "Duplicate"
in what sense? An object's only "exact" double is itself. Once we move
off in time and space and properties, more precise notions of
"duplicate" are needed than the intuitive ones. Sharing the SAME
physical properties (e.g., obeying the same differential equations
[thanks to Si Kochen for that criterion])? Or perhaps just ANALOGS of
them? But then that gets a bit circular.

>	A digital device or method operates on symbols, rather than 
>	physical (or other) reality.  Analog computers may operate on 
>	(real) voltages and electron flow, while digital computers 
>	operate on symbols and their logical interrelationships.

On the face of it, digital computers "operate" on the same physical
properties and principles that other physical mechanisms do. What is
different is that some aspects of their operations are INTERPRETABLE
in special ways, namely, as rule-governed operations of symbol tokens
that STAND FOR something else. One of the burdens of this discussion
is to determine precisely what role the A/D distinction plays in that
phenomenon, and vice versa. What, to start with, is a symbol?

>	Digital operations are formal; that is they treat form rather 
>	than content, and are therefore always deductive, while the 
>	behavior of real things and their analogs is not.

Unfortunately, however, these observations are themselves a bit too
informal. What is it to treat form rather than content? One candidate
that's in the air is that it is to manipulate symbols according to
certain formal rules that indicate what to do with the symbol tokens
on the basis of their physical shapes only, rather than what the tokens or
their manipulations or combinations "stand for" or "mean." It's not clear
that this definition is synonymous with symbol manipulation's always
being "deductive." Perhaps it's interpretable as performing deductions,
but as for BEING deductions, that's another question. And how can
digital operations stand in contrast to the behavior of "real things"?
Aren't computers real things?

>	It is one of my (unpopular) assertions that the central nervous 
>	system of living organisms (including  myself) is best understood 
>	as an analog of "reality"; that most interesting behavior 
>	such as induction and the detection of similarity (analogy and 
>	metaphor) cannot be accomplished with only symbolic, and 
>	therefore deductive, methods.

Such a conjecture would have to be supported not only by a clear
definition of all of the ambiguous theoretical concepts used
(including "analog"), but by reasons and evidence. On the face of it,
various symbol-manipulating devices in AI do do "induction" and "similarity
detection." As to the role of analog representation in the brain:
Perhaps we'd better come up with a viable literal formulation of the
A/D distinction; otherwise we will be restricted to figurative
assertions. (Talking too long about the analog tends to make one
lapse into analogy.)

--------
(4)
lanl!a.LANL.ARPA!crs (Charlie Sorsby) writes:

>	It seems to me that the terms as they are *usually* used today
>	are rather bastardized... when the two terms originated they referred
>	to two ways of "computing" and *not* to kinds of circuits at all.
>	The analog simulator (or, more popularly, analog computer) "computed"
>	by analogy.  And, old timers may recall, they weren't all electronic
>	or even electrical.

But what does "compute by analogy" mean?

>	Digital computers (truly so) on the other hand computed with
>	*digits* (i.e.  numbers). Of course there was (is) analogy involved
>	here too but that was a "higher-order term" in the view and was
>	conveniently ignored as higher order terms often are.

What is a "higher-order term"? And what's the difference between a
number and a symbol that's interpretable as a number? That sounds like
a "higher-order" consideration too.

>	In the course of time, the term analog came to be used for those
>	electronic circuits *like* those used in analog simulators (i.e.
>	circuits that work with continuous quantities). And, of course,
>	digital came to refer to those circuits *like* those used in digital
>	computers (i.e. those which work with discrete or quantized quantities).

You guessed my next question: What does "like" mean, and why does
the underlying distinction correlate with continuous and discrete
circuit properties?

>	Whether a quantity is continuous or discrete depends on such things
>	as the attribute considered, to say nothing of the person doing the
>	considering, hence the vagueness of definition and usage of the
>	terms. This vagueness seems to have worsened with the passage of time.

I couldn't agree more. And an attempt to remedy that is one of the
objects of this exercise.

--------
(5)
sundt@mitre.ARPA writes:

>	Coming from a heavily theoretical undergraduate physics background, 
>	it seems obvious that the ONLY distinction between the analog and
>	digital representation is the enumerability of the relationships
>	under the given representation.

>	First of all, the form of digital representation must be split into
>	two categories, that of a finite representation, and that of a 
>	countably infinite representation.  Turing machines assume a countably
>	infinite representation, whereas any physically realizable digital
>	computer must inherently assume a finite digital representation.

>	Second, there must be some predicate O(a,b) defined over all the a
>	and b in the representation such that the predicate O(a,b) yields
>	only one of a finite set of symbols, S(i) (e.g. "True/False").
>	If such a predicate does not exist, then the representation is
>	arguably ambiguous and the symbols are "meaningless".

>	Looking at all the (a,b) pairs that map the O(a,b) predicate into
>	the individual S(i):
>	ANALOG REPRESENTATION: the (a,b) pairs cannot be enumerated for ALL S(i)
>	COUNTABLY-INFINITE DIGITAL REPRESENTATION: the (a,b) pairs cannot be
>	enumerated for ALL S(i).
>	FINITE DIGITAL REPRESENTATION: all the (a,b) pairs for all the S(i)
>	CAN be enumerated.

>	This distinguishes the finite digital representation from the other two 
>	representations. I believe this is the distinction you were asking
>	about. The distinction between the analog representation and the
>	countably-infinite digital representation is harder to identify.
>	I sense it would require the definition of a mapping M(a,b) onto the
>	representation itself, and the study of how this mapping relates to
>	the O(a,b) predicate. That is, is there some relationship between
>	O(?,?), M(?,?) and the (a,b) that is analogous to divisibility in
>	Z and R.  How this would be formulated escapes me.

You seem to have here a viable formal definition of something
that can be called an "analog representation," based on the
formal notion of continuity and nondenumerability. The question seems to
remain, however, whether it is indeed THIS precise sense of
analog that engineers, cognitive psychologists and philosophers are
informally committed to, and, if so, whether it is indeed physically
realizable. It would be an odd sort of representation if it were only
an unimplementable abstraction. (Let me repeat that the finiteness of
physical computers is NOT an analogous impediment for Turing-machine
theory, because the finite approximations continue to make sense,
whereas both the finite and the denumerably infinite approximations to
the A/D distinction seem to vitiate the distinction.)

It's not clear, by the way, that it wasn't in fact the (missing)
distinction between a countable and an uncountable "representation" that
would have filled the bill. But I'll assume, as you do, that some suitable
formal abstraction would capture it. The question remains: Does that
capture our A/D intuitions too? And does it sort out all actual (physical)
A/D cases correctly?
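
Sundt's enumerability criterion can at least be exhibited for the
finite case (a toy example; the symbol set and the predicate are mine,
not his):

```python
from itertools import product

# A toy "finite digital representation": the symbols are 2-bit
# strings, and the predicate O(a, b) asks whether a and b are the
# same symbol.
SYMBOLS = ["00", "01", "10", "11"]

def O(a, b):
    return a == b  # yields one of a finite set of outcomes {True, False}

# Because the symbol set is finite, the (a, b) pairs behind each
# outcome of O can be exhaustively enumerated -- Sundt's criterion
# for a FINITE digital representation.
table = {(a, b): O(a, b) for a, b in product(SYMBOLS, repeat=2)}
assert len(table) == 16
assert sum(table.values()) == 4  # the four identical pairs
```

For an analog representation, on his account, no such exhaustive table
could exist for every outcome, and that is precisely what cannot be
exhibited in finite code.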

--------

The rest of Mitch Sundt's reply pertains also to the 
"Searle, Turing, Categories, Symbols" discussion that
is going on in parallel with this one:

>	we can characterize when something is NOT intelligent,
>	but are unable to define when it is.

I don't see at all why this is true, apart from the fact that
confirming or supporting an affirmation is always more open-ended
than confirming or supporting a denial.

>	[Analogously] Any attempt to ["define chaos"] would give it a fixed
>	structure, and therefore order... Thus, it is the quality that
>	is lost when a signal is digitized to either a finite or a
>	countably-infinite digital representation.  Analog representations
>	would not suffer this loss of chaos.

Maybe they wouldn't, if they existed as you defined them, and if chaos
were worth preserving. But I'm beginning to sense a gradual departure from
the precision of your earlier formal abstractions in the direction of
metaphor here...

>	Carrying this thought back to "intelligence," intelligence is the
>	quality that is lost when the behavior is categorized among a set
>	of values. Thus, to detect intelligence, you must use analog
>	representations (and meta-representations). And I am forced to
>	conclude that the Turing test must always be inadequate in assessing
>	intelligence, and that you need to be an intelligent being to
>	*know* an intelligent being when you see one!

I think we have now moved from equating "analog" with a precise (though
not necessarily correct) formal notion to a rather free and subjective
analogy. I hope it's clear that the word "conclude" here does not have
quite the same deductive force it had in the earlier considerations.

>	Thinking about it further, I would argue, in view of what I just
>	said, that people are by construction only "faking" intelligence,
>	and that we have achieved a complexity whereby we can percieve *some*
>	of the chaos left by our crude categorizations (perhaps through
>	multiple categorizations of the same phenomena), and that this
>	perception itself gives us the appearance of intelligence. Our
>	perceptions reveal only the tip of the chaotic iceberg, however,
>	by definition. To have true intelligence would require the
>	perception of *ALL* the chaos.

Thinking too much about the mind/body problem will do that to you
sometimes.

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

bradley@think.COM (Bradley Kuszmaul) (11/02/86)

The distinction between digital and analog is in our minds.

"digital" and "analog" are just names of design methodologies that
engineers use to build large systems.  "digital" is not a property of
a signal, or a machine, but rather a property of the design of the
machine.  The design of the machine may not be a part of the machine.
(For example, in many computers, the design of the computer is never
given to the customer.)

If I gave you a music box (which played music, naturally), you
might not be able to tell whether it was digital or analog (even
if you could open it up and look at it and probe various things with
oscilloscopes or other tools).

  Suppose I gave you a set of schematics for the box in which
everything was described in terms of voltages and currents, and which
included an explanation of how the box worked using continuous
mathematical functions.  The schematics might explain how various
subcomponents interpreted their inputs as real numbers (even though
the inputs might be a far cry from real numbers e.g. due to the
quantization of everything by physicists).  You would probably
conclude that the music box was an analog device.

  Suppose, on the other hand, that I gave you a set of schematics for
the same box in which all the subcomponents were described in terms of
discrete formulas (e.g. truth tables), and included an explanation of
how the inputs from reality are interpreted by the hardware as
discrete values (even though the inputs might be a far cry from
discrete values e.g. due to ``noise'' from the uncertainty of
everything).  You would probably conclude that the music box was a
digital device.

  The idea is that a "digital" designer and "analog" designer might
very well come up with the same hardware to solve some problem, but
they would just understand the behaviour differently.
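
The point about two designers arriving at the same hardware can be put
concretely: the very same record of voltages supports an "analog"
reading and a "digital" reading, and nothing in the record itself
decides between them. A sketch (the numbers are invented):

```python
readings = [0.12, 4.87, 0.31, 4.95, 0.08]  # the same physical voltages

# An "analog" reading of the record: the values themselves carry
# the information, so we might report, say, their mean.
analog_view = sum(readings) / len(readings)

# A "digital" reading of the SAME record: all that matters is which
# side of a threshold each value falls on.
digital_view = [1 if v > 2.5 else 0 for v in readings]

assert digital_view == [0, 1, 0, 1, 0]
```

The data are identical; only the design methodology brought to them
differs, which is just Kuszmaul's claim.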

  If designers could handle the complexity of thinking about
everything, they would not use any of these abstractions, but would
just build hardware that works.  Real designers, on the other hand,
must control the complexity of the systems they design, and the
"digital" and "analog" design methodologies control the complexity of
the design while preserving enough of reality to allow the engineer to
make progress.

  If you buy my idea that digital and analog literally are in our
minds, rather than in the hardware, then the problem is not one of
deciding whether some particular system is digital (such questions
would be considered ill-posed).  The real problem, as I view it, is to
distinguish between the digital and analog design methodologies.

  We can try to understand the difference by looking at the cases
where we would use one versus the other.

   We often use digital systems when the answer we want is a number.
    (such as the decimal expansion of PI to 1000 digits)
   We often use analog systems when the answer we want is something
    physical (I don't really have good examples.  Many of the things
    which were traditionally analog are going digital for some of the
    reasons described below.  e.g. music, pictures (still and moving),
    the control of an automobile engine or the laundry machine)
   We often use digital systems when there are lots of cheap
    digital components available.  (This is not really a circular
    argument.  The reason I personally might build a digital control system for
    something rather than an analog control system is that digital
    components are cheap for me to buy.)
   Digital components are nice because they have specifications which
    are relatively straightforward to test.  To test an analog
    component seems harder.  Because they are easier to test,
    they can be considered more "uniform" than analog components (a
    TTL "OR" gate from one mfr is about the same as a TTL "OR" gate
    from another).  (The same argument goes the other way too...)
   Analog components are nice because sometimes they do just what you
    wanted.  For example, the connection from the gas pedal to the
    throttle on the carburetor of a car can be made by a mechanical
    linkage whose output is an (approximately) continuous
    function of the input position.  To "fly by wire" (i.e. to use a
    digital linkage) requires a lot more technology.


 (When I say "we use a digital system", I really mean that "we design
such a system using a digital methodology", and correspondingly for
the analog case)

 There are of course all sorts of places between "digital" and
"analog".  A system may have digital subsystems and analog subsystems
and there may be analog subsystems inside the digital subsystems and
it goes on and on.  This sort of thing makes the decision about
whether some particular design methodology is digital or analog hard.

 -Bradley
   bradley@think.com (arpa)
   think!bradley (uucp)

lodman@ncr-sd.UUCP (Mike Lodman) (11/03/86)

Digital devices count, analog devices measure. This seems to me like
the most basic difference between them, regardless of the circuitry
involved.

Too many people responding to this question seem to be confusing the 
definitions of analog and continuous, and of discrete and digital.
They do not mean the same thing at all. 

Michael Lodman
Advanced Development
NCR Corporation E&M San Diego