[comp.arch] Compatibility with EBCDIC

jlperkin@uokmax.UUCP (J Les Perkins) (01/01/70)

Keywords: keeping up


We have a unique problem that really hasn't been encountered before this
era of computers.  Six years ago someone said to me that in the past 20 years
we have gained more knowledge than mankind had in all the time before.  As
computers get better and technology advances even faster, our knowledge grows
even more rapidly, and with all these advances it is increasingly difficult
to keep up.  So companies are continually confronted with the decision of
whether to upgrade and whether it is economical to do so.  When these
companies, particularly large institutions, do not upgrade, they tend to hold
back further advances in the field, because "if the big institutions aren't
going to upgrade, why change?" attitudes start popping up.  That keeps us
stuck with ancient equipment designed in the '60s.  I guess what I'm trying
to say is that we all need to be more open to change, because we definitely
have more changes ahead of us than we have already seen, with advances coming
faster every day.

JLP
(Well, it was a thought.)

msf@amelia (Michael S. Fischbein) (08/24/87)

I shouldn't say anything about the addressing or stack capabilities of the
360, since I didn't work on one, but even the smaller IBMs used EBCDIC and
I get tired of what I consider unjustified slams against it.  (Particularly
since there are several justified slams :-) ).

In article <1044@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>ASCII was designed quite logically.

So was EBCDIC.  So, for that matter, were CDC's 6/12-bit code, 5-bit Baudot,
and (hold this thought) 12-bit Hollerith.

>There were a few other design thoughts that went into ASCII but I don't
>remember them; I seem to faintly remember there was another way of
>deriving a subset alphabet.  Certainly it wasn't put together
>haphazardly like EBCDIC was.

Shucks, and I thought significant thought went into EBCDIC, too.  Of course,
since its roots are so much older than ASCII's, different considerations were
paramount.

>  It's hard to defend a character set in
>which (x >= 'A' && x <= 'Z') doesn't assure us that x is alphabetic:
>this is plainly a very undesirable property of EBCDIC.

But if you use that as the test, then ASCII lower case letters are not
alphabetic!  Or maybe you meant (x >= 'A' && x <= 'z'), in which case [, \,
], ^, _, and ` count as alphabetic.
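
To make the point concrete, here is a throwaway sketch (my own toy
functions, nothing beyond <ctype.h>) contrasting the portable test with
the range test:

    #include <ctype.h>

    /* Portable test: the ctype functions know the host character set,
     * whether that is ASCII, EBCDIC, or something else entirely. */
    int is_upper_letter(int c)
    {
        return isupper((unsigned char)c);
    }

    /* The range test works in ASCII but not in EBCDIC, where the
     * uppercase letters sit in three separate bands (A-I, J-R, S-Z)
     * with non-letter code points in the gaps between them. */
    int is_upper_letter_range(int c)
    {
        return c >= 'A' && c <= 'Z';
    }

    /* An EBCDIC-safe range version has to test each band explicitly:
     *   (c >= 'A' && c <= 'I') || (c >= 'J' && c <= 'R') ||
     *   (c >= 'S' && c <= 'Z')
     */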

>  On the other
>hand, perhaps we should count our blessings and be thankful that IBM
>chose to make the digits contiguous, so we don't have to use a lookup
>table to evaluate a string of digits.

Of course, this is a fairly logical necessity if one is to start from
Binary Coded Decimal :-).
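
Just to spell that out: a digit string converts the same way under either
code, since both keep '0' through '9' in ten consecutive code points.  A
quick sketch (the function name is mine, purely for illustration):

    /* Works under ASCII *and* EBCDIC: the digits are contiguous in
     * both, so no lookup table is needed. */
    long digits_to_long(const char *s)
    {
        long n = 0;
        while (*s >= '0' && *s <= '9')
            n = n * 10 + (*s++ - '0');
        return n;
    }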

Now, project yourself back into the early '60s.  (Yes, EBCDIC's roots are
older than that, but I think this is adequate.)  What is the major data
storage code, dating back to the 1900 census?  12-bit Hollerith encoding.
Quick now, using the electronic components available then, design a simple
unit record device that will take punched cards and print their alphabetic
contents on them.  Or look at a blank punched card (i.e., no printing, just
holes) and translate the punches into ASCII, in your head.
Difficult feat of memory?  Well then, why switch away from convenient
EBCDIC?  You could do the punch-to-EBCDIC translation as fast as you could
write down the characters.  Further, so could your card reader.  ASCII
would have slowed it down.
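
In case the shape of that translation isn't obvious, here is a rough sketch
(a toy function of my own, not any real card-reader logic) of the
alphabetic part of the punch-to-EBCDIC mapping:

    /* Illustration only: map a Hollerith alphabetic punch (one zone row
     * plus one digit row) to its EBCDIC code point.  The zone row picks
     * the high nibble and the digit row supplies the low nibble, which
     * is why the card-to-printer translation was so cheap to wire up.
     * Returns -1 for combinations this sketch doesn't cover. */
    int hollerith_alpha_to_ebcdic(int zone, int digit)
    {
        if (digit < 1 || digit > 9)
            return -1;
        switch (zone) {
        case 12: return 0xC0 + digit;                     /* A-I */
        case 11: return 0xD0 + digit;                     /* J-R */
        case 0:  return (digit >= 2) ? 0xE0 + digit : -1; /* S-Z */
        default: return -1;
        }
    }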

Now, in the '80s, does EBCDIC still make sense?  Well, not really, except
where continued compatibility with 25-year-old records and programs is
important, or where the extra cost of conversion can't be justified by a
possible gain in sorting efficiency.  If you want to slam EBCDIC, talk about
026 vs. 029 incompatibilities and such; don't spout about `holes' or sorting
order.  Every one of the myriad character codes available has some
advantages and some disadvantages; none is perfect.  ASCII is the best
compromise mostly because it is the most widely accepted, not because there
is some magic in its design.
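
For the record, the collation argument just comes down to where the classes
sit: ASCII puts digits below uppercase below lowercase, while EBCDIC puts
lowercase below uppercase below digits.  A quick sketch of the consequence,
assuming a plain byte-wise comparison (strcmp compares as unsigned char):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Under ASCII, "1A" compares below "A1" ('1' is 0x31, 'A' is
         * 0x41); under EBCDIC the order flips ('1' is 0xF1, 'A' is
         * 0xC1).  Either way the comparison yields a consistent total
         * order, just not the same one. */
        printf("\"1A\" %s \"A1\" on this machine\n",
               strcmp("1A", "A1") < 0 ? "sorts before" : "sorts after");
        return 0;
    }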

		mike

Michael Fischbein                 msf@prandtl.nas.nasa.gov
                                  ...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.

john@geac.UUCP (John Henshaw) (09/14/87)

In article <666@uokmax.UUCP>, jlperkin@uokmax.UUCP (J Les Perkins) writes:
> 
>                  Six years ago someone said to me that in the past 20 years
>we have gained more knowledge than mankind had in all the time before.  As
>computers get better and technology advances even faster, our knowledge grows
>even more rapidly, and with all these advances it is increasingly difficult
>to keep up.
>                                 I guess what I'm trying to say is that we all
>need to be more open to change, because we definitely have more changes ahead
>of us than we have already seen, with advances coming faster every day.
> 
>JLP

I think you're saying "knowledge" when you should be saying "information".
It is clear that the current state of the art in information gathering is
moving along nicely, but I will argue that we are not moving along in the
same way with respect to "knowledge".  (In fact, we *could* be regressing
somewhat, due mainly to the displacement of "knowledge" by "information" in
our consciousness.  Any comments?)

In fact, it seems that the big push in longer-term R&D is towards
"knowledge-based technologies", since we already have "information
technologies" today.  I see the difference between the two concepts as:

information := facts
knowledge   := inferences based upon facts (information)

I don't see the massive gathering of information as "advances", but more
as the raw material for future possibilities. 

-john-
-- 
John Henshaw,			(mnetor, yetti, utgpu !geac!john)
Geac Computers Ltd.		"My back to the wall,
Markham, Ontario, Canada, eh?		 a victim of laughing chance..."