[sci.electronics] The A/D Distinction: 5 More Replies

harnad@mind.UUCP (Stevan Harnad) (10/29/86)

Here are 5 more replies I've received on the A/D distinction. I'll
respond in a later module. [Meantime, could someone post this to
sci.electronics, to which I have no access, please?]

------
(1)
Message-Id: <8610271622.11564@ur-seneca.arpa>
In-Reply-To: <13@mind.UUCP>
U of Rochester, CS Dept, Rochester, NY
ken@rochester.arpa
CS Dept., U. of Roch., NY 14627.
Mon, 27 Oct 86 11:22:10 -0500

I think the distinction is simply this: digital deals with a finite set
of discrete {voltage, current, whatever} levels, while analog deals
with a *potentially* infinite set of levels. Now I know you are going
to say that analog is discrete at the electron-noise level, but the
circuits are built on the assumption that the spectrum is continuous.
This leads to different mathematical analyses.

Sort of like infinite-memory Turing machines: we don't have them, but
we program computers as if they had infinite memory, and in practice,
as long as we don't run out, it's OK. So as long as we don't notice
the noise in analog, it serves.
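
To see the same working fiction in software (my illustration, not the
poster's): floating-point numbers are really a finite set of discrete
levels, yet we analyze them as if they were the continuous reals, and
it serves as long as we stay above the "noise" of machine epsilon.

    # Python sketch: floats are discrete levels, but we compute with
    # them as if they were continuous reals.
    import sys

    eps = sys.float_info.epsilon   # spacing of floats near 1.0: the "noise level"
    x = 1.0
    print(x + eps == x)            # False: a full step is still resolvable
    print(x + eps / 4 == x)        # True: below the "noise", levels blur together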

--------
(2)
Tue, 28 Oct 86 20:56:36 est
cuuxb!mwm
AT&T-IS, Software Support, Lisle IL

In article <7@mind.UUCP> you write:
>Engineers and computer scientists seem to feel that they have a
>I have some ideas, but I'll save them until I sample some of what the
>Net nets. The ground-rules are these: Try to propose a clear and
>objective definition of the analog/digital distinction that is not
>arbitrary, relative, a matter of degree, or loses in the limit the
>intuitive distinction it was intended to capture.
>
>One prima facie non-starter: "continuous" vs. "discrete" physical
>processes.
>
>Stevan Harnad (princeton!mind!harnad)

	Analog and digital are two ways of *representing* information. A
	computer can be said to be analog or digital (or both!) depending
	upon how the information is represented within the machine, and
	particularly, how the information is represented when actual
	computation takes place.

	Digital is essentially a subset of analog, where the range of
	properties used to represent information is grouped into a
	finite set of values.  For example, the classic TTL digital
	model uses electrical voltage to represent values, grouped
	into the following ranges:
	above +5 volts -- not used
	+2..+5 volts (approx) -- a binary 1
	0..+2 volts (approx)  -- a binary 0
	negative voltages -- not used.

	Important to distinguish here is the grouping of the essentially
	infinite possibilities of voltage into a finite set of values.
	A system that used 4 voltage ranges to represent a base-4 number
	system would still be digital.  Note that this means it takes
	several voltages to represent an arbitrarily precise number.
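
	To make the grouping concrete, here is a minimal sketch (mine, not
	the poster's; the thresholds are the approximate ranges above, not
	a real TTL datasheet) of how a continuum of voltages collapses
	into a finite symbol set:

	    # Python sketch: classify a continuous voltage into the finite
	    # TTL symbol set described above.
	    def ttl_value(volts):
	        if volts > 5.0 or volts < 0.0:
	            return None    # outside the ranges used
	        elif volts >= 2.0:
	            return 1       # binary 1
	        else:
	            return 0       # binary 0

	    print(ttl_value(3.7), ttl_value(0.4), ttl_value(-1.0))   # 1 0 None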

	Analog, on the other hand, refers to using one infinite range of
	values to directly represent a different infinite range of values:
	for example, representing the number 15 with 15 volts, and the
	number 100 with 100 volts.  Note that this means it takes just one
	voltage to represent an arbitrarily precise number.

	This is my pot-shot at defining analog/digital, how they relate,
	and how they are used in most systems I am familiar with.  I think
	these definitions make reasonably clear what "analog to digital"
	converters (and "digital to analog" converters) do.
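
	To illustrate (again my sketch, not the poster's; the bit width
	and full-scale voltage are invented), an A/D conversion groups a
	continuous voltage into one of finitely many codes, and the D/A
	step cannot recover what the grouping threw away:

	    # Python sketch of an idealized A/D -> D/A round trip.
	    def adc(volts, full_scale=5.0, bits=3):
	        levels = 2 ** bits
	        code = int(volts / full_scale * (levels - 1) + 0.5)  # nearest code
	        return max(0, min(levels - 1, code))

	    def dac(code, full_scale=5.0, bits=3):
	        return code / (2 ** bits - 1) * full_scale

	    print(dac(adc(3.14)))   # ~2.86, not 3.14: the finite grouping loses precision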

	This is also why slide rules are considered analog: you are using
	distance rather than voltage, but you can interpret a distance as
	precisely as you want.  An abacus, on the other hand, also uses
	distance, but each disk means either one thing or another, and it
	takes lots of disks to represent a number.  An abacus, then, is
	digital.

 Marc Mengel
 ...!ihnp4!cuuxb!mwm

--------
(3)
<bcsaic!ray>
Thu, 23 Oct 86 13:10:47 pdt
Message-Id: <8610232010.AA18462@bcsaic.LOCAL>


Try this:

(An) analog is a (partial) DUPLICATE (or abstraction) 
of some material thing or some process, which contains 
(it is hoped) the significant characteristics and properties 
of the original.  An analog is driven by situations and events 
outside itself, and its usefulness is that the analog may be 
observed and, via induction, the original understood. 

A digital device or method operates on symbols, rather than 
physical (or other) reality.  Analog computers may operate on 
(real) voltages and electron flow, while digital computers 
operate on symbols and their logical interrelationships.

Digital operations are formal; that is, they treat form rather 
than content, and are therefore always deductive, while the 
behavior of real things and their analogs is not.  (Heresy follows.)
It is one of my (unpopular) assertions that the central nervous 
system of living organisms (including myself) is best understood 
as an analog of "reality"; that most interesting behavior, 
such as induction and the detection of similarity (analogy and 
metaphor), cannot be accomplished with only symbolic, and 
therefore deductive, methods.

--------
(4)
Mon, 27 Oct 86 16:04:36 mst
lanl!a.LANL.ARPA!crs (Charlie Sorsby)
Message-Id: <8610272304.AA25429@a.ARPA>
References: <7@mind.UUCP> <45900003@orstcs.UUCP>, <13@mind.UUCP>

Stevan,
I've been more or less following your query and the resulting articles.

It seems to me that the terms as they are *usually* used today are rather
bastardized.  Don't you think that when the two terms originated they
referred to two ways of "computing" and *not* to kinds of circuits at all?

The analog simulator (or, more popularly, analog computer) "computed" by
analogy.  And, as old-timers may recall, they weren't all electronic or even
electrical.  I vaguely recall reading about an analog simultaneous
linear-equation solver that comprised plates (rectangular, I think), cables,
and pulleys.

Digital computers (truly so), on the other hand, computed with *digits* (i.e.,
numbers).  Of course there was (is) analogy involved here too, but that was
a "higher-order term" in the view and was conveniently ignored, as higher-order
terms often are.

In the course of time, the term analog came to be used for those
electronic circuits *like* those used in analog simulators (i.e., circuits
that work with continuous quantities).  And, of course, digital came to
refer to those circuits *like* those used in digital computers (i.e., those
which work with discrete or quantized quantities).

Whether a quantity is continuous or discrete depends on such things as the
attribute considered, to say nothing of the person doing the considering;
hence the vagueness of definition and usage of the terms.  This vagueness
seems to have worsened with the passage of time.

Best regards,

Charlie Sorsby
	...!{cmcl2,ihnp4,...}!lanl!crs
				crs@lanl.arpa

--------
(5)
Message-Id: <8610280022.AA16966@mitre.ARPA>
Organization: The MITRE Corp., Washington, D.C.
sundt@mitre.ARPA
Date: Mon, 27 Oct 86 19:22:21 -0500

Having read your messages for the last few months, I
couldn't help but take a stab at this issue.

Coming from a heavily theoretical undergraduate physics background,
I find it obvious that the ONLY distinction between the analog and
digital representations is the enumerability of the relationships
under the given representation.

First of all, digital representation must be split into
two categories: that of a finite representation, and that of a
countably infinite representation.  Turing machines assume a countably
infinite representation, whereas any physically realizable digital computer
must inherently assume a finite digital representation (be it ever so large).

Thus, we have three distinctions to make:

1) Analog / Finite Digital
2) Countably-Infinite Digital / Finite Digital
3) Analog / Countably-Infinite Digital

Second, there must be some predicate O(a,b) defined over all the a and b
in the representation such that the predicate O(a,b) yields only one of
a finite set of symbols, S(i) (e.g. "True/False").  If such a predicate does
not exist, then the representation is arguably ambiguous and the symbols are
"meaningless".

An example of an O(a,b) is the equality predicate over the reals, integers,
etc.

Looking at all the (a,b) pairs that map the O(a,b) predicate into the
individual S(i), note that the following is true:
  ANALOG REPRESENTATION: the (a,b) pairs cannot be enumerated for ALL
    S(i).
  COUNTABLY-INFINITE DIGITAL REPRESENTATION: the (a,b) pairs cannot be 
    enumerated for ALL S(i).
  FINITE DIGITAL REPRESENTATION: all the (a,b) pairs for all the S(i)
    CAN be enumerated.

This distinguishes the finite digital representation from the other two 
representations.  I believe this is the distinction you were asking about.
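
To make the enumerability claim concrete, a small sketch (mine, not the
author's; the 2-bit width is an arbitrary choice) using the equality
predicate O(a,b) over a finite digital representation:

    # Python sketch: list every (a, b) pair behind each symbol of
    # O(a, b) = (a == b) for a finite (2-bit) representation.
    symbols = {True: [], False: []}
    values = range(4)                       # the whole 2-bit representation
    for a in values:
        for b in values:
            symbols[a == b].append((a, b))  # O(a, b): the equality predicate
    print(len(symbols[True]), len(symbols[False]))   # 4 12 -- a complete enumeration

Over the integers or the reals, the corresponding listing never
terminates, which is exactly the distinction drawn above.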

The distinction between the analog representation and the countably-infinite
digital representation is harder to identify.  I sense it would require
the definition of a mapping M(a,b) onto the representation itself, and 
the study of how this mapping relates to the O(a,b) predicate.

That is, is there some relationship between O(?,?), M(?,?), and the (a,b)
that is analogous to divisibility in Z and R?  How this would be formulated
escapes me.


On your other-minds problem:
[see "Searle, Turing, Categories, Symbols"]

I think the issue here is related to the above classification.  In particular,
I think the point to be made is that we can characterize when something is
NOT intelligent, but are unable to define when it is.

A less controversial issue would be to "Define chaos".  Any attempt to do so
would give it a fixed structure, and therefore order.  Thus, we can only
define chaos in terms of what it isn't, i.e. "Chaos is anything that cannot
be categorized."

Thus, chaos is the quality that is lost when a signal is digitized to either
a finite or a countably-infinite digital representation.

Analog representations would not suffer this loss of chaos.

Carrying this thought back to "intelligence": intelligence is the quality that
is lost when behavior is categorized among a set of values.  Thus, to
detect intelligence, you must use analog representations (and
meta-representations).  And I am forced to conclude that the Turing test must
always be inadequate in assessing intelligence, and that you need to be an
intelligent being to *know* an intelligent being when you see one!!!

Of course, there is much error in categorizations like this, so in the *real*
world, a countably-infinite digital representation might be *O.K.*.

I wholly agree with your argument for basing symbols on observables,
and would also argue that semantic content is purely a result of a rich
syntactic structure with only a few primitive predicates, such as set
relations, ordering relations, etc.

Thinking about it further, I would argue, in view of what I just said, that
people are by construction only "faking" intelligence, and that we have
achieved a complexity whereby we can perceive *some* of the chaos left
by our crude categorizations (perhaps through multiple categorizations of
the same phenomena), and that this perception itself gives us the appearance
of intelligence.  Our perceptions reveal only the tip of the chaotic iceberg,
however, by definition.  To have true intelligence would require the perception
of *ALL* the chaos.

I hope you found this entertaining, and am anxious to hear your response.

Mitch Sundt The MITRE Corp. sundt@mitre.arpa