mjs@sfmag.UUCP (M.J.Shannon) (09/29/85)
> Then you can define
>
>	float foo:16;
>
> if you really think you can do something useful with 16-bit floats.  Someone
> must use them for something...

Just such a construct is used in the accounting software in many UNIX System
kernels.  It seems to suffice for the application.
--
	Marty Shannon
UUCP:	ihnp4!attunix!mjs
Phone:	+1 (201) 522 6063
Disclaimer: I speak for no one.
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/02/85)
> > Then you can define
> >
> >	float foo:16;
> >
> > if you really think you can do something useful with 16-bit floats.  Someone
> > must use them for something...
>
> Just such a construct is used in the accounting software in many UNIX System
> kernels.  It seems to suffice for the application.

Great, a violation of the C language spec in the kernel.
mash@mips.UUCP (John Mashey) (10/06/85)
> > > Then you can define
> > >
> > >	float foo:16;
> > >
> > > if you really think you can do something useful with 16-bit floats.  Someone
> > > must use them for something...
> >
> > Just such a construct is used in the accounting software in many UNIX System
> > kernels.  It seems to suffice for the application.
>
> Great, a violation of the C language spec in the kernel.

Not to worry.  There's no float foo:16 definition per se; what's there is DMR's

	typedef ushort comp_t;

followed by code that does arithmetic in longs, then packs the results into
the comp_t, with a 3-bit exponent and a 13-bit mantissa.  This was done because:

a) You need more than 16 bits to represent the observed numbers.
b) You want both good precision for smaller numbers, for doing measurement
   studies, and at least gross precision for larger ones, for system
   accounting.  A good short-float application.
c) The process accounting system can generate a lot of data; this was
   especially a concern on the PDP-11/70s current when this code was written;
   anything that kept the size down was worth doing.
d) Using the comp_t code lets you keep the size of (struct acct) to 32 bytes.
   It is moderately helpful that this size be a power of 2 so that the struct
   never crosses disk buffer boundaries.
--
-john mashey
UUCP:	{decvax,ucbvax,ihnp4}!decwrl!mips!mash
DDD:	415-960-1200
USPS:	MIPS Computer Systems, 1330 Charleston Rd, Mtn View, CA 94043
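For illustration, here is a minimal sketch of the kind of packing described
above, assuming the 3-bit exponent counts 3-bit shifts of the 13-bit mantissa
(i.e. the stored value is roughly mantissa << 3*exponent) and that the input
is non-negative.  The names and the lack of rounding are illustrative only;
the real kernel code may differ in detail.

	typedef unsigned short comp_t;	/* 3-bit exponent, 13-bit mantissa */

	comp_t
	pack_comp(t)			/* pack a long into a comp_t */
	long t;
	{
		register int exp = 0;

		while (t >= (1L << 13)) {	/* too big for 13 bits? */
			t >>= 3;		/* drop 3 low-order bits */
			exp++;			/* count them in the exponent */
		}
		return ((comp_t)((exp << 13) | t));
	}

	long
	unpack_comp(c)			/* recover an approximate long */
	comp_t c;
	{
		return ((long)(c & 017777) << (3 * ((c >> 13) & 07)));
	}

Shifting 3 bits at a time keeps the exponent within 3 bits for any 32-bit
input, so small values keep full precision while large ones keep only gross
precision, as described above.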
mjs@sfmag.UUCP (M.J.Shannon) (10/09/85)
>>> Then you can define
>>>
>>>	float foo:16;
>>>
>>> if you really think you can do something useful with 16-bit floats.  Someone
>>> must use them for something...
>>
>> Just such a construct is used in the accounting software in many UNIX System
>> kernels.  It seems to suffice for the application.
>
> Great, a violation of the C language spec in the kernel.

Sorry, I didn't mean to imply that the syntactic construct is used, rather
that a 16-bit quantity is used to represent some floating-point values.  I
believe the exponent is 3 bits.
--
	Marty Shannon
UUCP:	ihnp4!attunix!mjs
Phone:	+1 (201) 522 6063
Disclaimer: I speak for no one.
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/14/85)
Earlier, I mentioned having read about a floating-point data representation
that obtained increased dynamic range, traded off against precision, by using
a variable number of bits for the exponent.  I have been informed that Bob
Morris holds a patent on this, which is called "tapered floating point" and
was described in some IEEE publication over 10 years ago.

Rummaging around in my notes, I discovered the article I had in mind,
entitled "FOCUS Microcomputer Number System" by Albert D. Edgar & Samuel C.
Lee, in the March 1979 issue of CACM, pp. 166-177.  It turns out that FOCUS
represents floating-point quantities as their base-2 logarithms, using a
fixed number of bits for the fractional part of the logarithm.  For example,
8-bit FOCUS data is interpreted as follows:

	sign,excess-8_fractional_exponent	meaning
	1,1001.000				-2^1      = -2
	1,0000.000				-2^(-8)  ~= -0.004
	0,0111.000				+2^(-1)   = 0.5
	0,1000.000				+2^0      = 1
	0,1000.101				+2^(5/8) ~= 1.5
	0,1111.111				+2^(63/8) ~= 235

This scheme has the advantage of not needing any bits to specify the size of
any field; otherwise it has characteristics similar to the scheme that trades
exponent against mantissa: large dynamic range combined with higher relative
precision for numbers near 1.  The article claims that FOCUS software
implementations run faster on the average than fixed-point operations
(presumably because multiply/divide is cheap for FOCUS).  Note that there is
a jump around true 0, so some adaptation of algorithms may be needed to work
well with FOCUS.  For further information, read the article.

This obviously doesn't belong in net.lang.c, but that's where the discussion
started.
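For illustration, a minimal sketch of an 8-bit FOCUS-style encoding, assuming
the low 7 bits hold an excess-8 base-2 logarithm with 3 fractional bits (so
the value is +-2^((field - 64)/8)) and the high bit is the sign.  These
details are inferred from the table above, not taken from the article, and
the names are made up.

	#include <math.h>

	#define FOCUS_BIAS	64	/* excess-8, measured in eighths */

	double				/* decode an 8-bit FOCUS value */
	focus_value(b)
	unsigned char b;
	{
		double mag = pow(2.0, ((int)(b & 0177) - FOCUS_BIAS) / 8.0);

		return ((b & 0200) ? -mag : mag);
	}

	unsigned char			/* multiply: add logs, xor signs */
	focus_mul(a, b)
	unsigned char a, b;
	{
		int logsum = (int)(a & 0177) + (int)(b & 0177) - FOCUS_BIAS;

		return (((a ^ b) & 0200) | (logsum & 0177));
	}

Note that multiplication reduces to an integer add of the logarithm fields
(with no overflow or underflow handling here, and no representation of true
0), which is presumably why multiply/divide comes out cheap.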