[net.lang.c] C needs BCD

craig@x.UUCP (Craig Lund) (10/19/84)

One of the biggest problems with C is the lack of a BCD
(Binary Coded Decimal) arithmetic type.

When I needed to write software to handle
a real-time network of automated banking machines (ATM's),
C was the best choice for an implementation language.

Unfortunately, C does not give a programmer access to the BCD machine
instructions offered by most modern computers/microprocessors.
The need to drop into assembly language to do BCD arithmetic
was the only annoying part of the entire ATM project.

Why should C provide floating point operations and not provide
BCD operations?

Craig Lund
Charles River Data Systems
(617) 626-1118
...!decvax!frog!craig

ron@brl-tgr.ARPA (Ron Natalie <ron>) (10/22/84)

Give me a break.  MARS NEEDS WOMEN, TOO!

The usual way to add extensions that are of interest to only a small
number of implementations or machines is to make them look like
functions.  This is OK even if they aren't implemented as functions.
E.g.

	a = bcdadd(b, 10);

could produce code just as good as

	a = b bcdaddopt 10;

if bcdadd were recognized by the compiler/loader and switched to inline
code.  We do exactly that for the memory synchronization primitives on
the HEP C compiler.  It allows new things to be added without breaking
the syntax of the language.

If you want the other case, use Ada.

-Ron

Let's keep C, C.

geoff@desint.UUCP (Geoff Kuenning) (10/22/84)

Craig Lund writes:

>Why should C provide floating point operations and not provide
>BCD operations ? 

Well, C is a systems programming language.  That produces a couple of effects:

    (1)	In the first place (let's be real about this) systems programmers
	tend to be snobs regarding business software.  Obviously, anyone
	who writes software for ATM's is a wimp, not a real
	programmer.  :-)  (Note:  lest anyone take me too seriously,
	despite the preceding sideways smile, let me note that in my
	experience the majority of the software for a device like an ATM is
	real-time control and communications stuff, not adding columns of
	PICTURE-described numbers).

    (2) Disregarding the first point, there is very little need for a BCD
	data type in *systems* programming (in the traditional sense of
	writing a general-purpose timesharing or batch OS for a large
	computer).  Even floating point is essentially unused in the
	kernel.  (On the Callan [Unisoft] kernel, I once fgrepped for
	'float' and 'double'.  If I remember correctly, "Double" appeared
	once, in a comment about indirect blocks!  'float' appeared only in
	code relating to our hardware floating-point support.)

These are not very good reasons.  I suspect that, if you put BCD support into
the language, a bunch of people would find it useful for stuff that takes more
than 32 bits to represent, but needs more accuracy than floating point.  (An
example would be a time represented in microseconds or milliseconds since
((time_t) 0), rather than in seconds.)

C was invented on the PDP-11, which has no BCD support, but does have floating
point.  At that time, only IBM 360's (not 370's yet!) and similar
business-oriented machines had BCD support;  I suspect it never crossed
the original implementors' minds to install BCD.

I vote for discussion of two questions by the net:

    (1) Is it appropriate to put a feature such as BCD into the language,
	when it is going to have to be implemented as an inefficient
	subroutine package on at least *some* hardwares?  [hardwares?  is
	that a word?  somebody please slap my wrists.]  Or should it be a
	standard subroutine package, like strings(3), that is often
	implemented with hardware instructions?
    (2) If we are going to hang BCD onto the language, what is a decent
	syntax, keeping in mind that this is basically an array type like
	strings?

Please redirect flames along the lines of "BCD is not useful" or "BCD is
for wimps" to /dev/null.  If you can't see the need for efficient increased
precision, I'm not really interested in your opinion.  (Flames along the
lines of "it's a wart on the language", etc., are gladly accepted--it's
wintery even here in Southern CA).
-- 
	Geoff Kuenning
	First Systems Corporation
	...!ihnp4!trwrb!desint!geoff

henry@utzoo.UUCP (Henry Spencer) (10/22/84)

> Unfortunately, C does not give a programmer access to the BCD machine
> instructions offered by most modern computers/microprocessors.

It doesn't give you access to lots of other interesting instructions,
either.  There's no way to exploit string instructions in C, for example,
without either a *smart* optimizing compiler or an interface that at
least looks like a function call.  Strings strike me as a much bigger
issue than BCD arithmetic, and they have many of the same problems, too;
if C isn't going to do anything sexy about strings [for the record, I
am opposed to any attempt to do so, on the grounds that it's too big a
change and too difficult], then there really isn't any reason to single
out BCD for special treatment.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/23/84)

> The mode for adding random extensions to the languages that are of
> interest to a small number of implementations or machines is to make
> them look like functions.  This is OK, even if they aren't implemented
> as functions.  I.e.
> 
> 	a = bcdadd(b, 10);
> 
> Could produce equally good code as
> 	a = b bcdaddopt 10;
> 
> If bcdadd was recognized by the compiler/loader and switched to inline
> code.  We do exactly that for the memory synchronization primitives on
> the HEP C compiler.  It allows new things to be added without breaking
> the syntax of the language.

Quite right!  Same as the reason for not having string operators
built into the language.

jab@uokvax.UUCP (10/25/84)

/***** uokvax:net.lang.c / x!craig /  6:31 pm  Oct 22, 1984 */
Unfortunately, C does not give a programmer access to the BCD machine
instructions offered by most modern computers/microprocessors.

Why should C provide floating point operations and not provide
BCD operations ? 
/* ---------- */
Pardon me while I bury my head in the sand.

C doesn't provide BCD for the same reasons that it doesn't provide FULL
floating-point (note that "float" is treated like a step-child, while
"double" is not). It's something that the people designing the language
simply didn't need to get their jobs done, I suspect (systems programming
is simply NOT DP programming), and its addition would have made the
compiler larger.

When the first compiler for C was written, they were talking 64K address
space, on good days.

	Jeff Bowles
	Lisle, IL

bsa@ncoast.UUCP (Brandon Allbery) (10/25/84)

Strange how net subjects hit you in unexpected places.  Just after reading
the request for C BCD, I was trying to compile a program which had somehow
become garbled.  One set of error messages it produced was:

... newline in BCD constant
... BCD constant too long
... gcos BCD constant illegal

The source code?

				offset -=`BLKSIZE;

(I *said* the code was garbled.)  I get the feeling MS foisted yet *another*
undocumented feature on us.  The question is, is this one like the IOCTL
calls for multiplex files and for the Berkeley `NTTY' discipline (i.e.
they're there but don't do a (@) thing)?  And why `gcos'???  This is
supposed to be *Microsoft* :-)

--bsa

nazgul@apollo.uucp (Kee Hinckley) (10/28/84)

* >O-<      (the early bird gets the bug?)
    \

I think implementing BCD like strings would NOT be a good idea.  People
who use it will want to use it just like any other type, and I don't think
that's unreasonable.  It is certainly something that I can see being
worthwhile to support from the standpoint of spreading the use of C.
My only reservation is that I've never used BCD in my life and hope I
never shall, but that's hardly an argument against it.


                                            -kee
                              ...decvax!wivax!apollo!nazgul

Even my Apple has BCD.
Sure, why not.

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/29/84)

> 				offset -=`BLKSIZE;

The Reiser cpp had a GCOS hook for BCD constants like `xyzzy`.
I don't know why this is enabled in the cpp you have.

henry@utzoo.UUCP (Henry Spencer) (10/30/84)

> I think implementing BCD like strings would NOT be a good idea.  People
> who use it will want to use it just like any other type, and I don't think
> that's unreasonable.

People who use strings would like to use them just like any other type,
and this is an eminently reasonable request.  The point is, it's hard.
How long are strings?  How are they allocated?  What happens when they
overflow?  How can you define an adequate set of string operators and
still keep the complexity down?  Most of these questions apply, with
equal strength, to BCD.  Remember, "BCD" does *not* denote a single type;
BCD of *what* *length*?  Building in a whole family of numeric types
is not a decision to be taken lightly; that way lies Ada...
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

dave@inset.UUCP (Dave Lukes) (10/31/84)

<<<FFFLLLAAAMMMEEE   OOONNN!!!!!!>>>

WARNING:
the hacker-general has determined that this flame is:
EXTREMELY ABUSIVE and SARCASTIC.

PART 1:			ADDING MUCK TO C
			================

I am SICK AND TIRED of people saying:
	1)	C needs complex
	2)	C needs BCD
	3)	C needs proper strings
	4)	C needs built-in graphics handling
	5)	C needs a special 3-byte ones-complement integer type to
		handle some funny code I wrote once which won't port easily
		(nearly joke)
	6)	Etc. etc. etc.

All these things are:
	1)	Possibly true for certain uses of the language
	2)	Certainly  untrue for most uses of the language
	3)	Absolutely GARBAGE and IRRELEVANT

	Now, I live in an uncivilised and deprived continent where we can't even
spell words like colour and odour properly, and yet even I seem to have
heard of some work done at an obscure research establishment (called,
if I recall correctly, AT&T Bell Laboratories) on an enhanced version of C,
called C++.

The most interesting features of the language are:
	* user-defined types with their own operators
	* inline functions
	* argument checking and coercion (overridable) for all functions
	* function overloading (e.g. no more fabs(f), abs(i) nonsense)
	* functions with optional trailing arguments
	* compatibility with C to roughly the same extent as the draft ANSI C
	* The C++ ``compiler'' can generate ``old C'' if required,
	  thus making re-implementation trivial.

C++ CAN do all the silly piddling little things people are always
complaining about (see [2]) ALREADY (plus a lot more).


			Re ANSI C etc.
			--------------

To Quote Robert Heinlein:
	``A committee is an animal with lots of legs and no brain''.

DISCLAIMER: One of my bosses (Mike Banahan) is on the committee,
so make what you will of this abuse.

I personally don't give a f*ck what the ANSI w*nkers do:
as far as I have been able to determine, this particular committee has
managed to do even less good than all the other committees on this
deity-forsaken mudball.

The ONLY useful thing to come out of the ANSI stuff is to make the float/double
coercion optional (WOW !! and it's ONLY taken them a YEAR).

			Re the Marriage of ANSI and ++
			------------------------------

Apparently: Stroustrup told the committee about the stuff he was doing,
and they (surprise, surprise) totally ignored it !!!

<<<FFFLLLAAAMMMEEE   OOOFFFFFF>>>

		Yours in frustration,
			Dave Lukes***.

*** Stupidity is a trade/service mark of language standardisation committees.

[1]	The C++ Programming Language - Reference Manual, Bjarne Stroustrup.
	AT&T Bell Laboratories, Computer Science Technical Report # 108.
[2]	Data Abstraction in C, Bjarne Stroustrup.
	AT&T Bell Laboratories, Computer Science Technical Report # 108.

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (11/02/84)

> I think implementing BCD like strings would NOT be a good idea.  People
> who use it will want to use it just like any other type, and I don't think
> that's unreasonable.  It is certainly something that I can see it being
> worth-while to support from the standpoint of spreading the use of C.

The same argument could be made for adding geometric, algebraic, database,
and other primitive data types to the language.  Since one can invoke
functions (or even use a preprocessor) to extend the language as needed
for different application areas, these facilities are in principle already
supported by C, although not in a standard way across systems unless one
ports his own libraries (which is what I recommend).  It would be nice to
have standard libraries for these things widely available, but people are
getting stingy about sharing their code if it has commercial value.

C, like UNIX, was designed to offer powerful, general access to system
facilities rather than to try to directly address every application area.
Let's keep C a "high-level assembler" and fix its problems without trying
to make it a "rich" language like PL/I or (to some extent) ADA.

paul@ISM780B.UUCP (11/03/84)

> One of the biggest problems with C is the lack of a BCD
> (Binary Coded Decimal) arithmetic type.

PL/I has already been invented.  It has 11 million different flavors
of arithmetic.  Why didn't you use it?  Too slow?  Not available?
Think about why that might be.

> Why should C provide floating point operations and not provide
> BCD operations ?

Right.  So get rid of floating point :-).

~Ok, so it's a flame.  At least it's a short flame...~

-- Paul Perkins

jim@ISM780B.UUCP (11/03/84)

There is BCD constant support in Reiser's cpp, #ifdef gcos.
This is Bell's fault, not Microsoft's.  I guess they decided
"since BCD constants are not supported but they show up in gcos programs,
we had better complain about them".  So much for obeying language specs.

-- Jim Balter, INTERACTIVE Systems (ima!jim)

henry@utzoo.UUCP (Henry Spencer) (11/04/84)

> Apparently: Stroustrup told the committee about the stuff he was doing,
> and they (surprise, surprise) totally ignored it !!!

I think this is being a bit unfair to the committee.  My understanding
is that they spent some effort coming to the conclusion that C++ is not
C, and that their mission in life is to standardize C, not C++.  This
is not a cop-out, it's a decision to limit objectives to keep the project
manageable and the result recognizable.

C++ is an interesting language, it's had significant success on large
projects within AT&T, and with any luck it might even get released so
the rest of us can use it.  But it's not C.  The incompatibilities are
quite minor, but there are enough additions to make it a distinctively
different language.

This issue got discussed on the net a little while ago.  Shortly after
I made some comments along the same lines, I got a letter from Stroustrup
himself, which said in part:

> ... You are clearly right, ANSI standardization is not for new languages,
> 	and C++ is not ready for ANSI. ...
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

res@ihuxn.UUCP (Rich Strebendt) (11/06/84)

In response to:

| > One of the biggest problems with C is the lack of a BCD
| > (Binary Coded Decimal) arithmetic type.
| 
| PL/I has already been invented.  It has 11 million different flavors
| of arithmetic.  Why didn't you use it?  Too slow?  Not available?
| Think about why that might be.

Why does the author of this question think that might be?  I would be
willing to lay odds that he is dead wrong!!!  As to why I think so,
continue to my comments below:

| > Why should C provide floating point operations and not provide
| > BCD operations ?
| 
| Right.  So get rid of floating point :-).

I would like to take this last remark more seriously than the author
intended.  Let us assume that we had to choose between FP and BCD.  How
should we choose TO SERVE THE LARGEST NUMBER OF CUSTOMERS?  Back in
1975, a side effect of some work I was doing was the discovery that a large
fraction of the programs being written were business programs written
in COBOL.  The fraction at that time was about 0.7 -- right, 70% of the
programs being written were being done in COBOL.  Why?  In large part,
said some business DP friends of mine, because COBOL works in decimal
numbers and does not screw up accounting and other business programs
(such as your payroll processing) by accumulating binary round-off
errors until pennies just disappear or magically appear driving the
accountants crazy.  I do not know what the current fraction of COBOL to
all other programs is, but I suspect that it has not changed too
drastically in the past decade.  Compare that with the small number of
numerical processing customers requiring FP (or thinking that they do
because they have not learned to scale integers as in FORTH :-) ).
Why, then, has PL/I not caught on?  Because most businesses tend to be
conservative in their approach to matters that can hurt the bottom line
(COBOL programs are doing the job, why take the risk of switching to
another language that will cost the company money to train people to
use it, and represents some measure of risk?  Besides, we have a number
of people already trained in COBOL and can hire lots of others if we 
need them).

I will agree that PL/I suffered from trying to be all things to all
people and ended up doing none of them superlatively well, but that was
not why it did not catch on in the business DP community.  

Sooooo, if we have to decide between FP and BCD, the smart money would
be betting on BCD.  FP is of academic interest!


					Rich Strebendt
					...!ihnp4!ihuxn!res

henry@utzoo.UUCP (Henry Spencer) (11/06/84)

> ...  Let us assume that we had to choose between FP and BCD.  How
> should we choose TO SERVE THE LARGEST NUMBER OF CUSTOMERS.  Back in
> 1975 a side effect of some work I was doing was discovery that a large
> fraction of the programs being written were business programs written
> in COBOL.  The fraction at that time was about 0.7 -- right, 70% of the
> programs being written were being done in COBOL.  Why?  In large part,
> said some business DP friends of mine, because COBOL works in decimal
> numbers and does not screw up accounting and other business programs
> (such as your payroll processing) by accumulating binary round-off
> errors until pennies just disappear or magically appear driving the
> accountants crazy.

Your business DP friends are making a standard mistake of the ignorant:
confusing the fixed-vs-floating-point distinction with binary-vs-decimal.
Last I heard, there is *NOTHING* in the specs for Cobol which says that
the numbers have to be implemented in decimal, although they have to
*look* that way (to some extent) to the programmers.  It is obvious that
dollars-and-cents people cannot tolerate approximate arithmetic, i.e.
floating-point.  But binary fixed-point is no less precise than decimal
fixed-point, and it's generally faster.  In the 60's, Burroughs used
binary arithmetic for the Cobol compilers on their 5500/6500 series; I
believe there were some problems with the details of how they did things,
but the basic idea worked just fine.
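
The point reduces to a two-line experiment: a binary integer count of cents is exact, while repeated decimal fractions in floating point are not.  An illustrative sketch:

```c
#include <assert.h>
#include <stdio.h>

/* Exactness comes from fixed-point, not from decimal: ten dimes kept
 * as an integer count of (binary) cents sum to exactly $1.00, while
 * ten additions of 0.10 in floating point do not, because 0.10 has
 * no exact binary representation. */
int main(void)
{
    long cents = 0;
    double dollars = 0.0;
    int i;

    for (i = 0; i < 10; i++) {
        cents += 10;
        dollars += 0.10;
    }

    assert(cents == 100);     /* fixed-point: exact */
    assert(dollars != 1.0);   /* floating point: off by a tiny error */
    printf("cents = %ld, dollars = %.17f\n", cents, dollars);
    return 0;
}
```
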
-- 
"BCD arithmetic is an idea whose time has come... and gone."

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

peters@cubsvax.UUCP (Peter S. Shenkin) (11/07/84)

I don't know about a BCD type, but it would be nice to have a %b
input and output format for use in printf and scanf.

{philabs,cmcl2!rocky2}!cubsvax!peters            Dr. Peter S. Shenkin 
Dept of Biol. Sci.;  Columbia Univ.;  New York, N. Y.  10027;  212-280-5517

geoff@desint.UUCP (Geoff Kuenning) (11/08/84)

In article <869@ihuxn.UUCP> Rich Strebendt writes:

>> PL/I has already been invented.  It has 11 million different flavors
>> of arithmetic.  Why didn't you use it?  Too slow?  Not available?
>> Think about why that might be.
>
>Why does the author of this question think that might be?  I would be
>willing to lay odds that he is dead wrong!!!  As to why I think so,
>continue to my comments below:

[A discussion of the well-known fact that 70% of all software is written in
COBOL follows].

>I will agree that PL/I suffered from trying to be all things to all
>people and ended up doing none of them superlatively well, but that was
>not why it did not catch on in the business DP community.  

Harumph.  Certainly conservatism had something to do with it.  But PL/I
was, and is, an amazing pig compared to COBOL, which is hardly efficient.
When you are buying from IBM, that translates into millions of dollars of
extra hardware.

>Sooooo, if we have to decide between FP and BCD, the smart money would
>be betting on BCD.  FP is of academic interest!

Harumph again.  BCD is necessary for the business community.  C, in case you
hadn't noticed, is not targeted at business programming.  Just because 70%
of all software is written for business (more accurately, 70% of all lines
of code--COBOL is a pretty high-line-count language) doesn't mean a
language intended for systems programming should implement BCD!  The
interesting question is how often you need each data type, and what the
expense is of doing it in software.  Systems programs and utilities
occasionally need extended precision and range, but they do not have
problems with binary/decimal conversion.  BCD exists *not* because it is
the only way to get extended precision, but because a business program that
did decimal/binary/decimal conversion during its processing would be a lot
slower.  This is not true of systems software.  Software floating point, on
the other hand, is *excruciatingly* slow.
-- 

	Geoff Kuenning
	First Systems Corporation
	...!ihnp4!trwrb!desint!geoff

geoff@desint.UUCP (Geoff Kuenning) (11/08/84)

In article <162@inset.UUCP> dave@inset.UUCP (Dave Lukes) writes:

>The ONLY useful thing to come out of the ANSI stuff is to make the float/double
>coercion optional (WOW !! and it's ONLY taken them a YEAR).

Harumph.  The *most* useful thing to come out of the ANSI stuff (and not
the only one) is the standardization of the "volatile" storage class.

>Apparently: Stroustrup told the committee about the stuff he was doing,
>and they (surprise, surprise) totally ignored it !!!

Of course.  ANSI stands for the American National Standards Institute.  Not
the American National Programming Language Development Institute.  C++ is
still totally experimental, unavailable, and *should* not be considered
in the development of a standard.  ANSI has never invented a significant
new feature in a programming language and it has no business doing so.
-- 

	Geoff Kuenning
	First Systems Corporation
	...!ihnp4!trwrb!desint!geoff

mwm@ea.UUCP (11/08/84)

/***** ea:net.lang.c / ihuxn!res /  8:48 pm  Nov  5, 1984 */
I would like to take this last remark more seriously than the author
intended.  Let us assume that we had to choose between FP and BCD.  How
should we choose TO SERVE THE LARGEST NUMBER OF CUSTOMERS.  Back in
1975 a side effect of some work I was doing was discovery that a large
fraction of the programs being written were business programs written
in COBOL.
					Rich Strebendt
					...!ihnp4!ihuxn!res
/* ---------- */

Rich, I think you are suffering a delusion about what languages are for.
Languages (usually, anyway) aren't designed and implemented to try to serve
as many "customers" as possible, they are designed (well, sometimes :-) and
implemented to fill a need; generally a need felt by *one* customer. ADA is
a perfect example, XLISP another.  If that language fills that need for
that customer then it is an unqualified success. If it happens to fill some
need (maybe not the same one!) of many other people as well, and gets
widespread use, then we call it "successful".

I can't speak for Ritchie, but I use C because I need a high-level systems
programming language. Given the history of C/Unix, I suspect that that is
what it was designed for originally. Though C has grown some since I first
started using it (v6), it still fills that need. Such usage calls for
pointers (dangerous beasts), a looseness about types (not all that safe in
a compiled language), and *lots* of integer arithmetic. It also calls for a
little fixed and/or floating point arithmetic. On the machines I use, it
*never* calls for BCD arithmetic.

C is not a be-all and end-all for programming languages. I've never
encountered a language that is, and never expect to. When I step outside
the systems programming environment, I change languages.  FORTRAN for
number crunching (C isn't suitable, due to the float/double problem, the
way it handles arrays, and a certain laxness about expressions), LISP for
symbol shuffling and as a programmable calculator, ICON for string work,
and CLU for general applications work. Of course, awk/sed/sh get used quite
a bit, too.

C does handle the systems work well, and has almost all the facilities I
need. There are some minor things I'd like to see added to improve its
usefulness *in that area*. What I wouldn't like to see are extensions for
things outside that area (you need to choose a better tool, not remake one
that doesn't fit), or changes that make it more suitable for something
outside that area while making it less suitable for work inside the area.
Adding a BCD type fits in the first category, changing the array
subscripting scheme to look like FORTRAN falls into the second.

For the barb, just because a feature is highly used in one tool, doesn't
mean that it should be added to them all. By that argument, the first thing
that needs to be done to C is to make it compatible with the most popular
language on Unix, the Bourne Shell. I don't think anyone would advocate
that. :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

Now, just to help, I'd like to make a suggestion. If you *really* want BCD
in a C compiler, you can follow COBOL. Change *all* the integer types to
BCD in various sizes. Since most of those COBOL programs run on IBM
hardware, you could satisfy a large portion of your customers with one
compiler. Or you could put in real BIGNUMs instead of BCD. Just think, the
code would still compile and run under a normal C compiler.  Of course,
you've just destroyed C for systems work, so you might as well change the
array data type to fit your needs, add stricter type checking to make your
code safer, remove those dangerous pointers, replace all the cryptic magic
cookies with real words, etc. Trouble is, what you end up with isn't C,
it's a language designed to meet your needs.

Maybe it would be better to start from scratch, and build a modern language
that fits the needs of the DP community? That would seem to be an
intuitively better route than trying to bend a systems tool to those needs.

	<mike

kpmartin@watmath.UUCP (Kevin Martin) (11/09/84)

>I don't know about a BCD type, but it would be nice to have a %b
>input and output format for use in printf and scanf.
>{philabs,cmcl2!rocky2}!cubsvax!peters            Dr. Peter S. Shenkin 


The thing I find amusing is that, when this particular discussion started
with the mention of BCD, I immediately thought of the funny 6-bit character
set that some Honeywell mainframes use (6 of these fit in a 36-bit word,
giving 6-char monocase externals, but that is another story). Reading on,
I determined that the author was discussing packed decimal.

The Bell labs GCOS C compiler (and the B language, C's ancestor) have the
construct `abcdef` to generate a BCD character constant, up to six characters.
The 'printf' in the B library has a %b format which prints strings of BCD
characters.

So the question arises: Does a '%b' format mean:
Packed decimal
BCD character set (or another alternative set, as appropriate)
Binary (base 2)
   ?
               Kevin Martin, UofW Software Development Group

breuel@harvard.ARPA (Thomas M. Breuel) (11/09/84)

I fail to see the need for bcd arithmetic even in business computing:
-- bcd numbers use more storage: an 18 digit number takes up 9 bytes,
   whereas the corresponding binary representation takes 8 bytes.
-- bcd arithmetic is slower than binary arithmetic: even with special
   bcd hardware, still more memory accesses are required to get all
   the data into the alu and to write out the result.
-- bcd arithmetic may be 'easier' to convert to ascii strings, but I doubt
   that unpacking a bcd number is significantly faster than converting
   a binary number to an ascii string. Even if it were, it would
   probably be balanced by (2).
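
The storage figures in the first point check out, though note the 64-bit "long long" used below is a modern convenience, not something 1984 C offered:

```c
#include <assert.h>
#include <stdio.h>

/* 18 packed-BCD digits occupy 9 bytes (two digits per byte), while
 * the largest 18-digit number needs only 60 bits, i.e. 8 bytes of
 * binary. */
int main(void)
{
    int bcd_bytes = (18 + 1) / 2;
    long long max18 = 999999999999999999LL;  /* largest 18-digit value */
    int bits = 0;

    while (max18 > 0) {
        bits++;
        max18 >>= 1;
    }
    assert(bcd_bytes == 9);
    assert(bits == 60);
    assert((bits + 7) / 8 == 8);
    printf("BCD: %d bytes, binary: %d bytes\n", bcd_bytes, (bits + 7) / 8);
    return 0;
}
```
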
If you really want to add a new data type to 'C' then 'veryLong' (i.e.
64bit integers) might be a lot more useful than 'bcd'.

						Thomas.
						(breuel@harvard)

PS: if you are into commercial computing, why don't you use COBOL?
After all, probably >95% of all business software is written in it...

karsh@geowhiz.UUCP (Bruce Karsh) (11/10/84)

> If you really want to add a new data type to 'C' then 'veryLong' (i.e.
> 64bit integers) might be a lot more useful than 'bcd'.

I think that a veryLong data type would be a good addition to C.  I don't
understand where the idea that 32 bits is enough came from anyway.  Perhaps
it is a leftover from the days when memory was so expensive that using up
more than 32 bits for an integer was prohibitively expensive.

There is a long list of integer quantities which are commonly used and
won't fit into 32 bits.  Two common examples are financial quantities
(greater than $20Million), and time expressed to a resolution of 
1 microsecond for a year.

I like the increased accuracy and reliability of working with integer
representations instead of floating point representations.  I'd like
to see more language support for this.
-- 
Bruce Karsh                                        ---------------------------
Univ. Wisconsin Dept. of Geology and Geophysics    |                         |
1215 W Dayton, Madison, WI 53706                   |   THIS SPACE FOR RENT   |
(608) 262-1697                                     |                         |
{ihnp4,seismo}!uwvax!geowhiz!karsh                 ---------------------------

greenber@acf4.UUCP (11/10/84)

<>

Isn't that what "double long" is for??

tom@uwai.UUCP (11/11/84)

> Isn't that what "double long" is for??

I don't think so... on my system, this program:

-->   main()
-->   {
-->     double long foo;
-->
-->    for (foo=2; ; foo *= foo)
-->        printf("%ld\n",foo);
-->   }

generates this when compiled:

--> % make foo
--> cc  foo.c  -o foo
--> "foo.c", line 3: illegal type combination
--> *** Error code 1
--> 
--> Stop.
--> %

I vote for 64-bit integers.  BCD is for COBOL and other antiquated
dinosaurs.

-- 

Tom Christiansen
University of Wisconsin
Computer Science Systems Lab 
...!{allegra,heurikon,ihnp4,seismo,uwm-evax}!uwvax!tom
tom@wisc-crys.arpa

GEACC022%TIMEVX%CITHEX@lbl.arpa (11/23/84)

 
][ is possibly a trademark of the Apple people
 
>> 
>>     space
>> 	There are a few cases where BCD actually uses less space than
>> 	simple binary: on most machines, +/-99,999 requires you to go
>> 	from 16 to 32 bits, but it fits in 6 bytes of BCD.
>>
>
>Uh, 6 bytes equals 48 bits on most machines.  
 
    I think he meant 6 nibbles = 3 bytes = 24 bits.
 
-----
All the world loves a straight man.
 
Gary Ansok
geacc022%timevx%cithex @ lbl-g.arpa

kay@flame.UUCP (Kay Dekker) (11/23/84)

<DDT>

What do people think of Bjarne Stroustrup's C++?

						Kay.
-- 
"Serendipity: finding something useful on the net"

			... mcvax!ukc!qtlon!flame!ubu!kay