[comp.arch] non-binary hardware

msb@sq.uucp (Mark Brader) (09/14/88)

[1] The word is "ternary", not "trinary".   If you want to use the "tri-"
    prefix, you have to say "triadic".  Of course, as with all things in
    language, this is subject to change if "trinary" becomes sufficiently
    popular; the words for bases 8 and 16 both changed when they became popular
    with computers.  (See Knuth, volume 2, sec 4.1; in 1st edition, p. 168.)
    But "ternary" is the accepted form.

[2] Please move this discussion out of comp.lang.c.  Discussion of non-
    binary machines should probably go in comp.arch (I'm cross-posting to
    there and directing followups to it).  If discussion of terminology
    occurs, it should probably go to sci.lang.  Sci.math is another possible
    group for some aspects.

[3] While I'm posting, I may as well point out that the ENIAC calculator
    (or computer, depending on your definition; it was plugboard-programmed)
    of 1945 used decimal, non-BCD arithmetic, but the underlying
    implementation was binary; each of the 10 digits in each of its 20
    registers contained 10 flip-flops, each containing 2 vacuum tubes!

Mark Brader			"You can't [compare] computer memory and recall
SoftQuad Inc., Toronto		 with human memory and recall.  It's comparing
utzoo!sq!msb, msb@sq.com	 apples and bicycles."		-- Ed Knowles

johns@calvin.EE.CORNELL.EDU (John Sahr) (09/14/88)

In article <5718@utah-cs.UUCP> u-dmfloy%sunset.utah.edu.UUCP@utah-cs.UUCP (Daniel M Floyd) writes:
>For example:
>
>Trinary 'AND':
>
>    0 1 2              0 1 2
>=========          =========
>0 | 0 0 0          0 | 0 0 0
>1 | 0 1 2          1 | 0 1 2
>2 | 0 2 2          2 | 0 2 1
>

There is at least one more possibility, namely

    0 1 2
=========
0 | F F F
1 | F T T
2 | F T T

The issue is, basically, what do you mean by AND?  What I've written is
the truth table for a logical AND, which treats any nonzero value as true -
essentially how C works.  D M Floyd has proposed operations which are
"digit-wise", or binary operations over a finite field, depending upon your
point of view.
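
Here's a quick C sketch of the distinction.  The "min" rule below is just
one common multiple-valued convention (0 = false, 2 = true, 1 = in
between); note it is NOT the same as the tables Floyd posted.

#include <stdio.h>

/* Logical AND on trits, C-style: any nonzero value counts as true,
 * reproducing the F/T table above (F = 0, T = 1).                  */
int logical_and(int a, int b)
{
    return (a != 0) && (b != 0);
}

/* Trit-wise AND under the common "min" convention of multiple-valued
 * logic.  One plausible reading, not necessarily Floyd's.           */
int min_and(int a, int b)
{
    return a < b ? a : b;
}

int main(void)
{
    int a, b;
    for (a = 0; a <= 2; a++)
        for (b = 0; b <= 2; b++)
            printf("%d AND %d: logical=%d min=%d\n",
                   a, b, logical_and(a, b), min_and(a, b));
    return 0;
}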

Computing gear which had n stable states, n > 2, would be pretty fascinating
stuff to play with, and might even have some nice practical applications.
n-ary systems, with n odd, would be useful for expressing some of the radar
signals that I deal with in my work.

I'll believe n-ary logic when I see it, however.
-- 
John Sahr,                          School of Elect. Eng.,  Upson Hall   
                                    Cornell University, Ithaca, NY 14853

ARPA: johns@calvin.ee.cornell.edu; UUCP: {rochester,cmcl2}!cornell!calvin!johns

lindsay@k.gp.cs.cmu.edu (Donald Lindsay) (09/16/88)

In article <655@calvin.EE.CORNELL.EDU> johns@calvin.ee.cornell.edu.UUCP (John Sahr) writes:
>Computing gear which had n-stable states, n > 2, would be pretty fascinating
>stuff to play with, and might even have some nice practical applications.
>n-ary systems, with n odd, would be useful for expressing some of the radar
>signals that I deal with in my work.
>I'll believe it when I see it (n-ary logic), however.

There have been conferences on "multiple valued logic" for years.

The dream is (was?) that this could give more functionality per wire.  For
example, a 4-valued logic would allow a 32 bit address to fit on 16 wires
instead of the usual 32 wires. Clearly, we would be looking to build
systems that were denser, more parallel, lower powered, physically smaller,
et cetera.
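
To make the wire-count arithmetic concrete, here's a little C sketch (the
address value is just an example) that packs a 32 bit address into 16
four-valued symbols, one per wire, and unpacks it again:

#include <stdio.h>

int main(void)
{
    unsigned long addr = 0xDEADBEEFUL;  /* example 32 bit address      */
    unsigned int  sym[16];              /* one 4-valued "wire" each    */
    unsigned long back = 0;
    int i;

    for (i = 0; i < 16; i++)            /* pack: 2 bits per symbol     */
        sym[i] = (addr >> (2 * i)) & 3;

    for (i = 15; i >= 0; i--)           /* unpack: rebuild the address */
        back = (back << 2) | sym[i];

    printf("addr = %08lx, rebuilt = %08lx\n", addr, back);
    return 0;
}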

It is actually fairly easy to build this stuff. However, it doesn't confer
any new computational ability - after all, binary computers can be
persuaded to perform base-10 arithmetic. Also, the circuitry has a cost.

Note that a binary signal which is nominally represented by
	false == 0 volts
	true  == 5 volts
is more truthfully
	false == under X volts
	true  == over X volts
where X is a threshold voltage, somewhere in between 0 and 5. Electrical
problems, such as noise, can cause a "false" to be incorrectly taken as
"true", or vice versa. An N-valued logic has N-1 of these thresholds, and
the N different nominal levels are necessarily nearer together, and
therefore more easily got wrong. The resulting circuit must be run a little
slower, or must use more precise transmitters and receivers, or must use
higher voltages/currents/whathaveyou. This causes costs, which balance
against the wins.
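
To put numbers on that, here's a tiny C sketch assuming N nominal levels
evenly spaced between 0 and 5 volts (just one possible assignment); the
worst-case noise margin is half the spacing between adjacent levels:

#include <stdio.h>

int main(void)
{
    double supply = 5.0;                /* assumed 0..5 volt swing */
    int n;

    for (n = 2; n <= 8; n++) {
        double spacing = supply / (n - 1);
        printf("N = %d: %d thresholds, levels %.2f V apart, "
               "noise margin %.2f V\n",
               n, n - 1, spacing, spacing / 2.0);
    }
    return 0;
}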

In fact, Intel has used 4-valued ROM cells in several products. Instead of
storing 1 as "transistor", and 0 as "no transistor", the palette was "no
transistor", "small transistor", "medium transistor", and "large
transistor". Unfortunately, the cell size had to be large enough to hold
the large transistor, whereas it would otherwise have only had to be large
enough to hold the small transistor. However, you only needed half the
cells, so the overall ROM came out smaller.

I don't believe that any recent ROMs are built this way. I'm not sure why,
but it might be the chip yield, and it might be the speed of the sense
amplifiers.

-- 
Don		lindsay@k.gp.cs.cmu.edu    CMU Computer Science

baum@Apple.COM (Allen J. Baum) (09/16/88)

[]
>In article <2997@pt.cs.cmu.edu> lindsay@k.gp.cs.cmu.edu (Donald Lindsay) writes:
>
>There have been conferences on "multiple valued logic" for years.
>

There is some sort of proof, which I've seen but don't remember the details
of, that the optimal base for computation is e (2.71828...). The proof wasn't
very complicated, and I've forgotten what the measure of 'best' is. Anyway,
back in the 50's and early 60's, it was believed by some that 3 was closer to
'e' than 2, so they up and built some base 3 computers. There are some
articles about techniques in some of the very early 'Computer Design'
magazines c. 1960.

One of the techniques used was transformer-coupled phase logic (called
parametrons??).  Logic 'levels' were not voltage levels, but the phase of an
AC signal (in 120-degree increments). You could get some interesting logic
functions just by how you wired up a transformer (two input windings, one
output winding). I think the Russians may have built a beast like this.
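
As a toy model of the idea (my own sketch, not a description of any real
parametron circuit), represent each trit as a phase of 0, 120, or 240
degrees, sum the input phasors as coupled windings might, and snap the
output back to the nearest 120-degree phase.  With three inputs this
behaves like a majority vote:

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* trit (0, 1, 2) -> phase in radians (0, 120, 240 degrees) */
static double trit_to_rad(int t)
{
    return t * 2.0 * M_PI / 3.0;
}

/* Sum three unit phasors and snap the resulting phase to the nearest
 * 120-degree step.  If all three inputs differ, the phasors cancel
 * and the result is indeterminate.                                  */
static int phase_majority(int a, int b, int c)
{
    double x = cos(trit_to_rad(a)) + cos(trit_to_rad(b)) + cos(trit_to_rad(c));
    double y = sin(trit_to_rad(a)) + sin(trit_to_rad(b)) + sin(trit_to_rad(c));
    double ang = atan2(y, x);
    if (ang < 0.0)
        ang += 2.0 * M_PI;
    return (int)floor(ang / (2.0 * M_PI / 3.0) + 0.5) % 3;
}

int main(void)
{
    printf("majority(0,0,1) = %d\n", phase_majority(0, 0, 1));  /* 0 */
    printf("majority(2,1,2) = %d\n", phase_majority(2, 1, 2));  /* 2 */
    return 0;
}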

Well, what goes around comes around. Someone recently posted a note in this
group about a patent filed by Seymour Cray for a GaAs gate structure using
'phase' logic. (Who says history is useless?)
--
{decwrl,hplabs}!nsc!baum@apple.com		(408)973-3385

shankar@srcsip.UUCP (Subash Shankar) (09/16/88)

In article <17234@apple.Apple.COM> baum@apple.UUCP (Allen Baum) writes:
>
> [about multi-valued logic]
>There is some sort of proof, which I've seen but don't remember the details
>of, that the optimal base for computation is e (2.71828...). The proof wasn't
>very complicated, and I've forgotten what the measure of 'best' is. Anyway,
>back in the 50's and early 60's, it was believed by some that 3 was closer to
>'e' than 2, so they up and built some base 3 computers.

The proof is as follows:

  Assume that hardware cost per digit is linearly proportional to the radix. 
    C = k1 * r   (C is cost per digit, r is radix)
  We know that the number of digits to represent a number, m, in radix r is
  approximately log(m) (base r) = ln(m)/ln(r)
    n = ln(m)/ln(r)  (n is number of digits, m is largest number (constant))
  So, the total cost is:
    Cn = k1 * r * ln(m) / ln(r)
  Differentiating (to minimize the cost) with respect to r results in
                 ln(r)-1
    k1 * ln(m) * -------
                 ln(r)^2
  Setting equal to 0 and solving results in
    ln(r)=1   ==>  r=e
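
A quick numeric check (mine, not from any reference) makes the minimum
visible; the constant k1*ln(m) scales every radix equally, so only the
factor r/ln(r) matters:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double r;

    /* cost factor r/ln(r): minimized at r = e, and note that 3
     * edges out both 2 and 4 by a small margin                   */
    for (r = 2.0; r <= 5.0; r += 1.0)
        printf("r = %.0f: r/ln(r) = %.4f\n", r, r / log(r));
    printf("r = e: r/ln(r) = %.4f\n", exp(1.0));  /* ln(e) = 1 */
    return 0;
}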

There is a serious flaw in this proof - it is not at all clear that the
hardware cost per digit is linearly proportional to the radix; hence the
"optimal" radix remains an open question.  A second flaw is that the above
applies only to functional components such as ALUs; with interconnect being
more important than logic on chips, multi-valued logic may still be of
great use.

eric@snark.UUCP (Eric S. Raymond) (09/17/88)

In article <17234@apple.apple.com>, baum@apple.UUCP (Allen Baum) writes:
> There is some sort of proof, which I've seen but don't remember the details
> of, that the optimal base for computation is e (2.71828...). The proof wasn't
> very complicated, and I've forgotten what the measure of 'best' is.

Would someone who is familiar with this result please post a summary? I too
have heard of it as a 'folk theorem', without description of the measure
with respect to which e is 'best' (though I have some vague idea that it
has something to do with 'entropy' in Shannon's information-theoretic sense;
I'm not sure whether this is a genuine memory of the explanation or something
I conjectured afterwards).

I've never been able to find a reference for this and it's been bugging me for
years. I've been on the point of posting about it several times, and Mr. Baum's
posting pushed me over that edge. Someone, please, lay this haunt to rest!
-- 
      Eric S. Raymond                     (the mad mastermind of TMN-Netnews)
      UUCP: ...!{uunet,att,rutgers}!snark!eric = eric@snark.UUCP
      Post: 22 S. Warren Avenue, Malvern, PA 19355      Phone: (215)-296-5718

baum@Apple.COM (Allen J. Baum) (09/20/88)

[]
>In article <8756@srcsip.UUCP> shankar@ely.UUCP (Subash Shankar) writes:
>The proof (of optimality of base-e hardware) is as follows:
>
>  Assume that hardware cost per digit is linearly proportional to the radix. 
> ..... (the proof)
>There is a serious flaw in this proof - it is not at all clear that the
>hardware cost per digit is linearly proportional to the radix; hence the
>controversy over the "optimal" radix remains an open question.  A second
>flaw is that the above shows optimal radices for functional components such
>as ALU's, but with interconnect being more important on chips, multi-valued
>logic may be of great use.

Agreed that the proof is flawed, but there are effects that cut both ways,
like signal-to-noise getting worse and settling times increasing. Anyway, a
book reference for the proof: "High-Speed Computing Devices" by Engineering
Research Associates, McGraw-Hill, 1950(!), pg. 84. They talk about relays
and triode logic a lot.

--
{decwrl,hplabs}!nsc!baum@apple.com		(408)973-3385