[net.arch] IBM and those upper 8 bits

johnl@ima.UUCP (03/16/85)

(Long but, I hope, interesting message follows.)
terak!doug thoughtfully came to IBM's defense saying that in 1964 a 24-bit 
address space seemed entirely adequate for the projected life of the 360.  
I think you can make a pretty persuasive case that even at the time they were 
shortsighted.  

The whole point of the 360 series was to define an architecture that would be 
implemented many times over its lifetime.  The initial plan had seven models, 
from model 30 to model 90, and they have probably reimplemented the 360 
architecture about 50 times by now.  It's true that they didn't expect that 
the architecture would last more than 10 years, but the danger signs were 
clearly there.  The model 65 was routinely delivered with a megabyte of memory, 
with several more megabytes of slow 8us core optional, and the 91 had 2MB of main 
memory.  So even at first they weren't that far from filling the address 
space.  Furthermore, the 360/67, which was a 65 with address translation 
hardware to allow paging, had 32 bit addressing in 1967, only three years 
after the 360 architecture was designed.  The main predecessor of the 360 
series, the 7094, had 32K words or roughly 128K bytes of main memory, and it 
was only 4 years old when the 360 was designed (though I admit its 
architecture dated back to about 1956.) One typically replaced a '94 with a 
model 65, a factor of 8 increase in memory vs.  a machine only 3 years old.  
You'd have to be pretty nearsighted not to expect the 360's address space to 
fill up after another few upgrade cycles, well before the 360's lifetime was 
expected to end.  
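The arithmetic behind that prediction is easy to sketch.  Here is a 
back-of-the-envelope calculation (my numbers, not the article's: 1MB delivered 
around 1965, memory growing by the same factor of 8 per roughly four-year 
upgrade cycle as the 7094-to-model-65 jump):

```python
# Back-of-the-envelope sketch of the upgrade-cycle argument above.
# Assumption (mine): memory grows by a factor of 8 per ~4-year cycle.
mem = 1 << 20            # model 65 era: 1 MB delivered routinely
space = 1 << 24          # 24-bit address space: 16 MB
year = 1965
cycles = 0
while mem < space:
    mem *= 8             # one upgrade cycle
    year += 4
    cycles += 1
print(cycles, year)      # -> 2 1973
```

Two upgrade cycles overflow the address space around 1973 -- inside even a 
ten-year projected lifetime for the architecture.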

The IBM Systems Journal published some special issues on the then-new 360 
series in 1964, and in them was some very enlightening material.  There was an article 
on the design of the floating point unit and their choice of hex rather than 
binary floating point, in which they claimed that using hex sped up 
normalization and got more exponent range without losing precision.  
Unfortunately, their analysis was just plain wrong (and not in very subtle 
ways, either, send me mail if you want the details) and the 360's float format 
loses 3 bits of precision on every operation compared to a well-designed 
format such as the PDP-11's.  Whoops.
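A rough sketch of the precision point (my own illustration, not the Systems 
Journal analysis): hex normalization only guarantees a nonzero leading hex 
*digit* in the fraction, so up to 3 of its leading bits may be zero, while a 
binary-normalized format like the PDP-11's guarantees a leading 1 bit (and 
can even hide it):

```python
# S/360 hex float: 24-bit fraction, normalized so the leading hex digit
# is nonzero -- that digit may be 0001, wasting 3 leading zero bits.
# PDP-11 binary float: the leading fraction bit is always 1 (and hidden),
# so all stored bits carry information.

def hex_normalized_significant_bits(frac24):
    """frac24: 24-bit integer fraction with a nonzero leading hex digit."""
    top_digit = frac24 >> 20              # leading hex digit, 1..15
    leading_zero_bits = 4 - top_digit.bit_length()
    return 24 - leading_zero_bits         # bits actually carrying precision

print(hex_normalized_significant_bits(0x100000))  # worst case -> 21
print(hex_normalized_significant_bits(0x800000))  # best case  -> 24
```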

There was also an article on the addressing scheme, in which it looked to me 
like they sincerely believed that their base-displacement addressing scheme 
would get them the advantages of virtual memory without the cost of building a 
DAT box.  The 360, for those not familiar, computes addresses by adding a 12 
bit displacement in the instruction to a 24 bit base value from a register.  
Some instructions also allow a second register to be added at the same time.  
They deliberately chose the 12 bit offset to be so small that nobody would 
consider trying to write programs that used direct addresses (since that would 
limit you to 4K) so that all addresses are relative to values stored in 
registers; this is indeed how one writes any 360 program.  But then it appears 
that they thought that they could get the effect of dynamic relocation by 
merely changing the base values in the registers.  Well, that's true, sort of, 
except that there's no way to tell which registers contain pointers and which 
don't, since the registers are all the same, and in any event any real program 
has lots of pointers stored away which will later be loaded into registers and 
dereferenced.  Whoops again.  I think again it's a case of proof by lack of 
imagination ("I can't imagine why anybody would want to do that....") 
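For the curious, the base-displacement computation described above can be 
sketched in a few lines (a simplified model of address generation only; real 
instruction decoding is ignored, and the register and displacement values 
here are made up):

```python
# S/360-style effective address: a 12-bit displacement from the
# instruction, plus a 24-bit base value from a register, optionally
# plus an index register, truncated to 24 bits.

def effective_address(base, displacement, index=0):
    assert 0 <= displacement < 4096           # 12 bits: at most 4K reach
    return (base + index + displacement) & 0xFFFFFF   # 24-bit wraparound

# A base register gives "addressability" over a single 4K window:
print(hex(effective_address(base=0x012000, displacement=0xFFF)))  # -> 0x12fff
```

The 4K ceiling on the displacement is exactly why every 360 program addresses 
everything relative to registers.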

Finally, you can ask why the software wasn't written to be upgradable to 32 
bit addresses so at least it might run on the 67.  I hear that as the OS/360 
project ran along, people got increasingly upset about how large it was 
getting and programmers got brownie points depending on how small their stuff 
was.  Data structures in the user space rather than system space didn't count, 
which is why practically every data structure is indeed in the user space and 
it's up to you to allocate it, initialize it, and be careful not to clobber 
it.  Furthermore, the bureaucracy was so big and huge that it apparently took 
six months to get a new data structure approved by the data structure 
committee so that programmers took to whispering in the halls and agreeing to 
put a few bits they needed to communicate into some otherwise unused hole in 
an existing data structure so they could continue working.  Under those 
conditions, it's no surprise that the high-order byte of practically every 
address in every data structure in the entire system was stuffed full of bits, 
and that IBM now has a major mess on their hands going to 32 bit addressing.  

John Levine, ima!johnl 

herbie@watdcsu.UUCP (Herb Chong [DCS]) (03/18/85)

In article <511@ima.UUCP> johnl@ima.UUCP writes:
<long article preceding>

>Under those 
>conditions, it's no surprise that the high-order byte of practically every 
>address in every data structure in the entire system was stuffed full of bits, 
>and that IBM now has a major mess on their hands going to 32 bit addressing.  
>
>John Levine, ima!johnl 

don't you mean 31-bit addressing?  the high order bit (sign bit for integers)
is still used as a flag when passing parameters.  they couldn't get away
from that because it is used so pervasively by everything ever written for
IBM systems.  i have the XA announcement letter somewhere, but i know 
it's 31 bits.
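The convention herbie is alluding to is the OS/360 variable-length parameter 
list: the high-order bit of the *last* address word in the list is set to 
mark the end.  A rough sketch (function names and example addresses are 
mine):

```python
# OS/360-style VL parameter list: 4-byte address words, with bit 0 of
# the final word set as an end-of-list flag.  Since that bit can never
# be part of an address, XA could extend addressing to 31 bits, not 32.

END_OF_LIST = 0x80000000

def build_plist(addrs):
    words = list(addrs)
    words[-1] |= END_OF_LIST          # flag the final parameter
    return words

def read_plist(words):
    addrs = []
    for w in words:
        addrs.append(w & 0x7FFFFFFF)  # strip the flag: 31 usable bits
        if w & END_OF_LIST:
            break
    return addrs

plist = build_plist([0x00001000, 0x00002000, 0x00003000])
print(read_plist(plist))  # -> [4096, 8192, 12288]
```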

Herb Chong...

I'm user-friendly -- I don't byte, I nybble....

UUCP:  {decvax|utzoo|ihnp4|allegra|clyde}!watmath!water!watdcsu!herbie
CSNET: herbie%watdcsu@waterloo.csnet
ARPA:  herbie%watdcsu%waterloo.csnet@csnet-relay.arpa
NETNORTH, BITNET, EARN: herbie@watdcs, herbie@watdcsu

henry@utzoo.UUCP (Henry Spencer) (03/19/85)

> terak!doug thoughtfully came to IBM's defense saying that in 1964 a 24-bit 
> address space seemed entirely adequate for the projected life of the 360.  
> I think you can make a pretty persuasive case that even at the time they were 
> shortsighted.  

It depends on which IBMers you are looking at.  If you ignore the software
(groan) then the only place where the 360 architecture is really tied to
24 bits is that its i/o channel architecture is limited to 24 bits.  The cpu
proper doesn't interfere (much) with the idea of leaving the top 8 bits
for future address expansion, but the i/o data structures don't have room.
Gene Amdahl has said that this was a deliberate tradeoff, not just a
matter of shortsightedness.  They recognized the potential future problem,
but short-term economic considerations in the i/o hardware were compelling.
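The i/o limitation is concrete in the Channel Command Word: an 8-byte 
structure whose data address field is exactly 24 bits, with the adjacent 
bytes already claimed by the command code, flags, and count.  A sketch of 
the format-0 CCW layout (bit positions from my reading of the Principles of 
Operation; treat the exact field boundaries as an approximation):

```python
import struct

# Format-0 CCW, simplified: command code (1 byte), data address
# (exactly 3 bytes -- the 24-bit limit), flags (1 byte), an unused
# byte, and a 2-byte count.  There is simply no spare room for
# wider addresses.

def pack_ccw(command, data_address, flags, count):
    assert data_address < (1 << 24), "CCW data address is only 24 bits"
    return struct.pack(">B3sBBH",
                       command,
                       data_address.to_bytes(3, "big"),  # the squeeze
                       flags,
                       0,            # unused byte
                       count)

ccw = pack_ccw(command=0x02, data_address=0x012345, flags=0x20, count=80)
print(ccw.hex())  # -> 0201234520000050
```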

> There was also an article on the addressing scheme, in which it looked to me 
> like they sincerely believed that their base-displacement addressing scheme 
> would get them the advantages of virtual memory without the cost of building a 
> DAT box.  ...
> ...  But then it appears 
> that they thought that they could get the effect of dynamic relocation by 
> merely changing the base values in the registers.  Well, that's true, sort of, 
> except that there's no way to tell which registers contain pointers and which 
> don't, since the registers are all the same, and in any event any real program 
> has lots of pointers stored away which will later be loaded into registers and 
> dereferenced.  Whoops again.  I think again it's the case of proof by lack of 
> imagination ("I can't imagine why anybody would want to do that...  .") 

Again, it depends on *which* IBM people you listen to.  Some of them
knew all along that this wouldn't work, and why, and said so.  The issue
of the Annals of the History of Computing a while ago that reprinted IBM's 
internal SPREAD 
report also had a panel discussion among a bunch of the original 360
designers.  This issue, among others, came up.

> Finally, you can ask why the software wasn't written to be upgradable to 32 
> bit addresses so at least it might run on the 67.  I hear that as the OS/360 
> project ran along, people got increasingly upset about how large it was 
> getting and programmers got brownie points depending on how small their stuff 
> was.  Data structures in the user space rather than system space didn't count, 
> which is why practically every data structure is indeed in the user space and 
> it's up to you to allocate it, initialize it, and be careful not to clobber 
> it.  Furthermore, the bureaucracy was so big and huge that it apparently took 
> six months to get a new data structure approved by the data structure 
> committee so that programmers took to whispering in the halls and agreeing to 
> put a few bits they needed to communicate into some otherwise unused hole in 
> an existing data structure so they could continue working.  Under those 
> conditions, it's no surprise that the high-order byte of practically every 
> address in every data structure in the entire system was stuffed full of bits, 
> and that IBM now has a major mess on their hands going to 32 bit addressing.  

Fred Brooks's book The Mythical Man-Month discusses these problems in
some detail.  (Brooks was one of the people at the top during the OS/360
development.)  You've got the outlines right, but the details are a
bit different.  One of the big mistakes was that core budgets for the
various modules were fixed before the detailed functionality was pinned
down.  So any time a programmer ran short, he looked for something he
could push out into user space, or into somebody else's module.  Or he
added an overlay, since there was no overlaying-rate budget; Brooks
mentions that there was a lot of unhappiness when the first OS/360
performance simulations came in.  It's probably true that some of these
things aggravated the use-those-spare-bits problem.

Ghod, I sound like I'm defending IBM...  They'll throw me out of the
Unix gurus' union...  I guess what I'm saying is that one should not
assume that everyone at IBM was so stupid that they couldn't see these
things coming.  Most of these problems were anticipated to some degree.
Internal politics and short-term gain sometimes got in the way.  Even
when they didn't, nobody knew for sure how to run something like this,
so nobody recognized that some early management decisions were mistakes
with far-reaching consequences.  And the "army of ants" implementation
method sure didn't help.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry