[comp.arch] R4000 "announcement"

jackk@shasta.Stanford.EDU (jackk) (02/08/91)

-----
I read in a recent EE Times article that there are no working chips
yet for the R4000. This type of pre-announcement is disturbingly
reminiscent of IBM's "pre-announcement" of the 360/95 before
there were even lab prototypes running. If I recall correctly,
some of their competitors took legal action against them. To this day,
such "pre-announcements" from IBM cause competitors to accuse them
of creating "fear, uncertainty, and doubt" in the marketplace to
freeze out competition, yet we hear no such accusations against
MIPS. Is it simply a matter of size ?

rcd@ico.isc.com (Dick Dunn) (02/08/91)

jackk@shasta.Stanford.EDU (jackk) writes:
> I read in a recent EE Times article that there are no working chips
> yet for the R4000. This type of pre-announcement is disturbingly
> reminiscent of IBM's "pre-announcement" of the 360/95 before
> there were even lab prototypes running...

Everything I've seen on the R4000 has been careful to point out that what
is being announced is *technology*, not a *product*.  I have seen no prices
for R4000's.  I have seen no promised delivery date for chips.

I don't see much similarity.

It is perhaps surprising that MIPS is willing to provide this much info
this early in the game.  (Perhaps they'd rather have the right info coming
from them than wrong guesses coming from their competitors?:-)

>...If I recall correctly,
> some of their competitors took legal action against them...

...because they pre-announced products, with price figures, which caused
problems for their competitors because people held off waiting for the
promised IBM product at the promised price instead of buying the real
products of the competitors.

Is this "pre-announcement"?  Is anyone really surprised to learn that MIPS
is working on a new chip, faster than the previous one???  (What other
possibilities are there?:-)

> [IBM] creating "fear, uncertainty, and doubt" in the marketplace to
> freeze out competition, yet we hear no such accusations against
> MIPS. Is it simply a matter of size ?

No, it's a matter of a completely different situation.  Study the details
a little more carefully before you get too upset.

With neither a price nor a delivery date, it's hard to see how MIPS has
announced anything to compete with existing products.  The only "FUD" I see
right now should be in the hearts of damnfool programmers who have given us
so much code assuming int==long==32-bits, and who are now seeing apocalyptic
visions sooner than they thought they might.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Don't lend your hand to raise no flag atop no ship of fools.

spot@CS.CMU.EDU (Scott Draves) (02/09/91)

In article <90@shasta.Stanford.EDU> jackk@shasta.Stanford.EDU (jackk) writes:

   I read in a recent EE Times article that there are no working chips
   yet for the R4000. This type of pre-announcement is disturbingly
   reminiscent of IBM's "pre-announcement" of the 360/95 before
   there were even lab prototypes running. 

MIPS has made it very clear that this is an announcement of an
architecture and not a product.


--

			IBM
Scott Draves		Intel
spot@cs.cmu.edu		Microsoft

cprice@mips.COM (Charlie Price) (02/11/91)

In article <90@shasta.Stanford.EDU> jackk@shasta.Stanford.EDU (jackk) writes:
>-----
>I read in a recent EE Times article that there are no working chips
>yet for the R4000. This type of pre-announcement is disturbingly
>reminiscent of IBM's "pre-announcement" of the 360/95 before
>there were even lab prototypes running. If I recall correctly,
>some of their competitors took legal action against them. To this day,
>such "pre-announcements" from IBM cause competitors to accuse them
>of creating "fear, uncertainty, and doubt" in the marketplace to
>freeze out competition, yet we hear no such accusations against
>MIPS. Is it simply a matter of size ?

This is certainly a valid question,
though the comparison to IBM seems a significant stretch to me.

We are talking about what the R4000 will be like.
We aren't saying when our partners might be willing
to sell them or what they might choose to charge.
This is at least somewhat different from making a product
announcement that says samples will be available in N months
where N > 6, for instance.

The R4000 has generated a lot of speculation and rumor
(so what else is new in the chip-design biz?)
and a lot of it was simply wrong.
I think that at least one of the reasons for saying something
before a product announcement was that people here thought it
was a very bad idea to let such incorrect rumors float around for long.
People were expecting *something*, but we aren't ready to make
a product announcement.

I agree that incomplete information about something that you can't
get your hands on, or get good "real measurements" for,
is annoying to technical folks.
It is annoying to me, too, not to be able to talk about it very much.
-- 
Charlie Price    cprice@mips.mips.com        (408) 720-1700
MIPS Computer Systems / 928 Arques Ave. / Sunnyvale, CA   94086-23650

mash@mips.COM (John Mashey) (02/12/91)

In article <1991Feb8.055009.9883@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
....
>With neither a price nor a delivery date, it's hard to see how MIPS has
>announced anything to compete with existing products.  The only "FUD" I see
>right now should be in the hearts of damnfool programmers who have given us
>so much code assuming int==long==32-bits, and who are now seeing apocalyptic
>visions sooner than they thought they might.

Well, this looks as good a place as any to talk about this.
We'll be getting out a good guide for programmers sometime in the next
couple of months, I hope.

It may be worth recounting some history here, as one seeks not
to repeat old problems.  (Every mistake in the computer industry
gets made at least 3 times: once by mainframe folks, once by minicomputer
folks, and at least once by microprocessor folks.  Sometimes supercomputer
folks have probably made the mistakes before also!  Also, it's always easy
for anybody (like me) to say "mistakes" with hindsight.)

Here are a few relevant quotes from "Computer Engineering: A DEC View
of Hardware Systems Design", Digital Press, 1978, Bell, Mudge, McNamara.
I don't know if this book is still available, but it's a really excellent
one, with a lot of good history and excellent insights into
technology trend curves that are still relevant today.
I especially like chapters 1, 2, 9-17.
From chapter 16 "The Evolution of the PDP-11", Bell & Mudge:
	"The biggest (and most common) mistake that can be made in a
computer design is that of not providing enough address bits for memory
addressing and management. ... For the PDP-11, the limited address problem
was solved for the short run, but not with enough finesse to support a
large family of minicomputers.  That was indeed a costly oversight. ...
it was realized that for some large applications there would soon be a bad
mismatch between the 64-Kbyte name space and 4-Mbyte memory space.  Thus, in
1974 architectural work began on extending the virtual address space of
the PDP-11...  This segmented address space ... was ill-suited to FORTRAN
and most other languages, which expect a linear address space...
Fortunately, the project was discontinued."

and, from chapter 17, "VAX-11/780: A Virtual Address Extension to the
DEC PDP-11 Family", Strecker:
	"For many purposes, the 65-Kbyte virtual address space typically
provided on minicomputers (such as the PDP-11) has not been and probably
will not continue to be a severe limitation.  However, there are some
applications whose programming is impractical in a 65-Kbyte virtual address
space, and perhaps more importantly, others whose programming is
appreciably simplified by having a large virtual address space."

At one point in time, C and UNIX really KNEW that an int, and a pointer
were 16-bits long.  Then, C got "long", to at least do 32-bit calculations,
and "int" became whatever was convenient.  Inside Bell Labs, before
UNIX got ported to other machines, C at least was available on various
32-bit architectures (like S/360).

As UNIX got ported to other 32-bit machines, all of us bad people who'd
figured int & char * were the same thing suffered, but everybody
learned pretty quickly not to assume this, and how to write code that
would work both on the 32-bitters (for future) and for 16-bitters
(for installed base). Also, besides the bulk of the existing programs,
which were, by definition runnable in the PDP-11's address space,
additional programs appeared where:
	a) You'd been chafing at the 16-bit addressing limit,
	because array sizes or something wanted to be bigger.
	b) You'd been chafing, because you'd had to restructure your
	application to break it up.
	c) You hadn't even considered running it on a PDP-11, and you'd
	been running it on bigger systems, but you really wanted it on UNIX,
	and so now you moved it over.
Of course, in the same time period, code also moved from PDP-11 & VAX to
3B's; people learned to parameterize for byte-ordering then,
or, outside BTL, when all of those early VAX -> 68K ports happened.

By now, 3rd-party software is quite well-parameterized, although I have
some lingering fear that people have once again been assuming char *
and int are the same size, as an awful lot of code runs only on 32-bit machines.
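
(To make that fear concrete, here is a contrived fragment of my own, not
from any real program.  The first routine "works" anywhere int and char *
happen to be the same size, and silently throws away the high bits on a
machine with 64-bit pointers and 32-bit ints; the second only assumes that
a pointer fits in a long.)

	/* Non-portable: assumes a pointer fits in an int.  Fine on the
	 * PDP-11 (16-bit int, 16-bit pointer) and the VAX (32/32);
	 * silently loses the high bits on a 64-bit-pointer machine. */
	char *bad_align8(char *p)
	{
		return (char *)(((int)p + 7) & ~7);
	}

	/* Safer for the transition: do the arithmetic in long, and
	 * only assume that a pointer fits in a long. */
	char *align8(char *p)
	{
		return (char *)(((long)p + 7L) & ~7L);
	}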

Now, looking at history, let's note a few things:
There was probably more pain than there needed to be in
the transition from PDP-11 to 32-bit systems.  Still, it wasn't
TOO bad, for the reasons below:
	a) By the time it happened, most of the needed code was already written
	in a high-level language.  This would have been much harder if the
	bulk of code had been assembly language.
	b) For a long time, many applications continued to be perfectly
	well runnable in the older environment, and in fact, many applications
	were able to use exactly the same source code in the old and new
	environments.
On the other hand:
	c) People might have been a little happier if it had been trivial
	to run PDP-11 UNIX binaries at full-speed on their VAXen,
	while recompiling only those programs that needed it.  Maybe this
	could have been done, or not.  
	d) People would have been happier if the compiler approaches
	had had more in common between the PDP-11 and VAXen.  Certainly, they
	were closer than some (for example, the 286 -> 386 transition),
	but there were enough differences that some strategies might want to
	be changed, not just low-level code.

Now, WHY 64-bit? and why now?
(The following is reasoning that we used, with a WHOLE LOT of input from
some of our friends, some of whom said we'd be nuts not to go 64-bit,
if we could.  It probably cost us 5-10% of the die space, and some time.)

1) DRAM gets 4X bigger every 3 years.  There's every reason to expect
this to continue through, at least, the 16Mbit and 64Mbit generations.
People argue about 256Mb; I don't know enough to argue.

2) If you draw the curves, just of high-end Sun & MIPS servers,  of physical
memory offered, per year (horizontal axis), with a log-scale on vertical
axis of memory size, you find:
	a) It's a straight line, not surprising, as it just follows DRAM.
	b) It crunches into the 4GB range around 1993.
Maybe MIPS and Sun are nuts, but if so, they have at least the following
company: HP, SGI, and IBM, whose numbers I haven't tracked as reliably over
the years, but who also offer big machines.

3) Now, consider two rules of thumb:
	a) 4X: some real, non-lunatic-fringe programs will use 4X more
	virtual memory than they have physical memory (if the software
	allows for this sanely). In particular,
	file-mapping techniques burn virtual address space much faster than
	they use physical memory.  Hennessy claims I'm being too conservative
	with 4X, that it's bigger, but I'm conservative.
	b) .5X: few people buy a maxed-out memory system, because they have
	nowhere to go.  LOTS of people buy .25X or .5X memory systems,
	because memory is an effective way to solve many problems, and
	you tend to get 4X more of it every 3 years, at about same price.
These two rules of thumb give you a graph, with a band, that intersects
4GB around 1991 (leading edge) or 1994 (trailing edge, where LOTS of
people have .5X max memory systems).  The conclusion from this data
is that:
	a) Leading edge users of micros are already starting to run
	into the limits. (They are, by the way).
	b) By 1994, the issue will be fairly widespread (I'll say
	below what I mean by that, and to whom the issue is a problem.)
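
(If you want to play with the arithmetic yourself, a throwaway program like
the one below will do it.  The 1990 starting point of roughly 1GB of physical
memory on a high-end server is a round number of mine for illustration, not
anybody's official figure; shifting it moves the crossover a year or so
either way, which is exactly the band described above.)

	#include <stdio.h>

	/* Toy extrapolation: physical memory on big servers grows about
	 * 4X every 3 years (following DRAM), and leading-edge programs
	 * use about 4X their physical memory as virtual space. */
	int main(void)
	{
		double phys_mb = 1024.0;	/* assumed 1990 high end: ~1GB */
		int year;

		for (year = 1990; year <= 1999; year += 3) {
			double virt_mb = 4.0 * phys_mb;	/* the 4X rule of thumb */
			printf("%d: ~%.0f MB physical, ~%.0f MB virtual%s%s\n",
			    year, phys_mb, virt_mb,
			    virt_mb >= 4096.0 ? " [virtual at/past 4GB]" : "",
			    phys_mb >= 4096.0 ? " [physical at/past 4GB]" : "");
			phys_mb *= 4.0;		/* DRAM: 4X every 3 years */
		}
		return 0;
	}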

AND THAT MEANS:
1991: better have chips that do it, and appropriate advice to software
developers, because large ISV software takes a while to move (just as it
took a while to clean up the PDP-11 16-bit stuff, especially in the'
numerous applications floating around BTL: the OS was only a small chunk
of the effort. NEED chips, so that:
1992: (no later than): better have systems getting out, so people can
be debugging OS and finishing tests on compilers, debuggers, etc.
Applications developers who care can either be working cleanups into
their code, or developing new ones to take advantage of bigger addressing.
1993: One hopes that systems, with 64-bit compilers, tools, etc.,
and at least some OS support, are getting into application developers'
hands in reasonable numbers.  Maybe some even getting to users.
1994: better have serious applications in users' hands.

Now, some people claim that there's no way in the world you can do it
that FAST.  I claim that it's possible, although only just barely, i.e., all of
this is only just in time, if you believe my data on DRAM trends.

Now, this DOESN'T MEAN:
	a) That every PC-user is doomed if they don't rush out and get
	a 64-bit processor :-)
	b) That most applications won't happily stay 32-bit forever.

However, certain applications want to use more than 32-bit addressing,
regardless of the physical memory on the machine.  There are almost always
hacks to extend the physical addressing by a few more bits, and some of us
have endured them.  They mostly cause pain for OS folks, not applications
people.  Likewise, segmented addressing (although we don't believe in it)
can be a reasonable solution for some problems.  The main issue is the
ease of programming, and people vary in their opinions.

Following are application areas likely to care about this issue.
1) DBMS: people use file-mapping more.  Older, big DBMS already wanted
more addressing space: consider 370/ESA for example, which has been around
for years, and has a less-than-pleasant-to-program mechanism to get
more bits.  Consider that a big 1990 SCSI disk is 1GB (30 bits), and
that people sell small deskside packages with 7GB already.
Consider that in just a few years, addressing every byte on 1 SCSI disk
will overflow 32-bits...  Obviously, there are ALWAYS ways to get
around this with disk access; people have been addressing more for years;
however, the further away you get, the worse it gets with hackeries.

2) Video: uncompressed, 1280x1024 screen, 24-bit color, 24 frames/second
= 3.75 MB/frame, or 90 MB/second.  4GB = 45 seconds of video of this kind.

3) Document photographs & other high-quality images:
8 1/2 x 11" page, 24-bit color, 300dpi = 25 MB.
4GB = 160 pages. (likewise, uncompressed)

4) CAD environments: typically have big databases of complex-structured
objects.  Big-physical-memory servers keep the databases, and run monster
simulations that consume both virtual and physical memory.
WORKSTATIONS tend not to have such big memories, but want to rummage around
in the databases for random slices of them, hopefully using the same
software for sanity.  On the simulation side, I confidently
predict that there will be some ECAD folks here who'll want more than
4GB virtual space for some chip-verification thing, within next
few years.
ECAD gobbles space; I suspect MCAD is at least as bad.

5) Geographic Information Systems.  Like 4)

AND, OF COURSE:
6) Technical number-crunchers: have NEVER fit in ANY space :-)

Anyway, the bottom-line conclusions were that there were reasons
to get going on the support of this transition, because there was a
small, but rather important fraction of applications that cared about this;
if we didn't start now, we'd only have to redesign the fundamental
integer unit in the very next spin of the chip; and that it
takes a while for the software world to evolve.
We do think that 64-bit desktops are necessary, not just to get
software development to happen, but because certain kinds of end users
will need it.

NOBODY expects this means that one's word processor or paint program
is suddenly obsolete.  On the other hand, the kinds of applications
that strain 32-bits are not little ones.  If a simple programming
model is available, it may make the difference between an application
being possible and not.

Anyway, that's the reasoning, right or wrong.  An interesting and useful
technical debate might occur regarding 64-bit flat address generation versus
the various segmentation schemes that are currently found, both on
technical merits and from the software writers' viewpoint.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

tbray@watsol.waterloo.edu (Tim Bray) (02/12/91)

mash@mips.COM (John Mashey) writes:
>a bunch of stuff about how we are running up against the 32-bit barrier.

He's right.  I'm one of those people (large text databases).  Also, I had
personal experience with the painful, segmented lurch from 16 bits to 32.

Now a native 64-bit machine would make a lot of things we want to do a *lot*
simpler.  But nonetheless, I find myself thinking seriously that a segmented
approach might not be so bad.  Reason - each additional bit past 32 buys
you *so much*.  In fact, with just an 8-bit extension, you hit a terabyte, and
it goes up (fast!) from there.

Furthermore, I worry about burning unnecessary real memory by storing 64 bits,
of which >20 are wasted, for every pointer.
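
(To put a rough number on it: for a pointer-heavy structure like the toy
node below, the size roughly doubles when pointers go from 32 to 64 bits.
This is just an illustration of mine, not anything from our code, and the
exact figures depend on the compiler's alignment and padding.)

	#include <stdio.h>

	/* A toy index node: two child pointers and a small key. */
	struct node {
		struct node *left;
		struct node *right;
		int	     key;
	};

	int main(void)
	{
		/* Typically 12 bytes with 32-bit pointers; with 64-bit
		 * pointers and natural alignment, 8 + 8 + 4 plus padding
		 * usually comes to 24 -- and none of the extra bits in
		 * those pointers carry any information for me. */
		printf("sizeof (struct node) = %lu bytes\n",
		    (unsigned long) sizeof(struct node));
		return 0;
	}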

I personally think the 32->64 transition is qualitatively different from
the 16->32 transition, for just these reasons.

But I think it's extremely admirable that Mipsco has bitten the bullet and
given it a serious shot.  Congrats, guys (assuming you get the chips working).

Cheers, Tim Bray, Open Text Systems

wayne@dsndata.uucp (Wayne Schlitt) (02/12/91)

In article <1991Feb8.055009.9883@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
> jackk@shasta.Stanford.EDU (jackk) writes:
> > I read in a recent EE Times article that there are no working chips
> > yet for the R4000. This type of pre-announcement is disturbingly
> > reminiscent of IBM's "pre-announcement" of the 360/95 before
> > there were even lab prototypes running...
> 
> [ ... ] 
> I don't see much similarity.
> 


it is, however, very similar to motorola's pre-announcement of the
'040....  i hope that mips has better luck delivering a real product
than motorola has had.  from what i have read in EE Times, there
appear to be several areas where mips is pushing the technology to the
limit, and if any one area can't quite be done they could run into real
problems.  the r4000 looks like a real hot chip, but then, the '040
looked real good too...


it's kind of sad...  i have been a loyal motorola follower for years,
but i think the delays in the '040 have killed the 68k line.  in my
book, the 68k is now a dying, but not yet dead, line.  it is no longer
going to win new design-ins, and the only major reason why people will
stick to the line is 'cause they have software that can only run on
the 68k.  sure there will still be systems with 68k's in them for at
least 5-10 years, and there will be new chips in the 68k line coming
out, but the market share of the 68k is going to do nothing but drop...


*sigh*.


i _really_ hope that mips isn't pre-announcing stuff for the same
reasons that motorola pre-announced the 68040...



-wayne

dscy@eng.cam.ac.uk (D.S.C.Yap) (02/12/91)

In article <45789@mips.mips.COM>, mash@mips.COM (John Mashey) writes:
> 
> AND, OF COURSE:
> 6) Technical number-crunchers: have NEVER fit in ANY space :-)
> 

Right, and they're never fast enough either :-(  Someone said that the
ANTICIPATED performance of systems based on this chip started around 50
mips.  That's nice, but I'd like someone to take a stab at an ANTICIPATED
mflop rating, which interests me infinitely more.  Any takers?

Davin

PS:  Don't flame me for asking, I just like numbers - meaningless or otherwise.

--
          .oO tuohtiw esoht fo noitanigami eht ot gnihton evael Oo.
      Davin Yap, University Engineering Department, Cambridge, England
                       -->  dscy@eng.cam.ac.uk  <--

rhealey@digibd.com (Rob Healey) (02/13/91)

In article <90@shasta.Stanford.EDU> jackk@shasta.Stanford.EDU (jackk) writes:
>I read in a recent EE Times article that there are no working chips
>yet for the R4000. This type of pre-announcement is disturbingly
>reminiscent of IBM's "pre-announcement" of the 360/95 before
>there were even lab prototypes running. If I recall correctly,
>some of their competitors took legal action against them. To this day,
>such "pre-announcements" from IBM cause competitors to accuse them
>of creating "fear, uncertainty, and doubt" in the marketplace to
>freeze out competition, yet we hear no such accusations against
>MIPS. Is it simply a matter of size ?

	Hmm, does this cover software simulators of said arch? If
	it exists in software can it be considered a working prototype?

	The excuse I saw was that MIPS was tired of competitors misrepresenting
	what the chip is/will be and decided to release the information
	now to clear up any misunderstandings. Take the excuse for
	what it's worth I guess. For all we know they might have silicon
	oscillating in a lab someplace, then again, they may not.

	My question here is:

	If an architecture exists only in software can it be considered
	a valid prototype of said arch? Does a physical prototype have
	to exist in this age of simulation? When does an arch. cease to
	be vaporware? Has anyone thought about or looked into this question?

		Curiously,

		-Rob Healey

casey@gauss.llnl.gov (Casey Leedom) (02/13/91)

| From: mash@mips.COM (John Mashey)
| 
| At one point in time, C and UNIX really KNEW that an int, and a pointer
| were 16-bits long.  Then, C got "long", to at least do 32-bit
| calculations, and "int" became whatever was convenient.  Inside Bell
| Labs, before UNIX got ported to other machines, C at least was available
| on various 32-bit architectures (like S/360).
| 
| As UNIX got ported to other 32-bit machines, all of us bad people who'd
| figured int & char * were the same thing suffered, but everybody learned
| pretty quickly not to assume this, and how to write code that would work
| both on the 32-bitters (for future) and for 16-bitters (for installed
| base). ...

  I hate to disappoint you John, but the current installed code base is
*VERY* sloppy about short vs. int vs. long vs. char * vs. ...  When we
ported 4.3BSD to the PDP-11 one of the biggest problems was the
assumption that int == long in the BSD code.  I corrected literally
thousands of such errors.

  I don't think that the next move will be any easier.  Nor do I think
such a move will ever be easy until we throw away lint, an abomination
that should never have existed, and install its functions into the
compiler itself so programmers can never escape it.  This won't solve all
of our problems, but it will go a long way.  At least many modern
compilers are much more picky about what they'll accept.  The old V6
compiler would let you do:

	int i, j;

	i = j.foo;

Where foo is a field member of some structure.  Absolutely incredible.
And the V6 kernel was full of such coding.
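
(For those who never had the pleasure: as I recall, the old compiler kept
structure member names in a single global namespace, each bound to a fixed
offset and type, so the fragment above was treated roughly like the sketch
below.  The struct and offsetof() here are modern scaffolding for
illustration only; I'm reconstructing from memory, not from the V6 source.)

	#include <stddef.h>

	struct s { int pad; int foo; };	/* gives "foo" a fixed offset */

	int i, j;

	void v6_equivalent(void)
	{
		/* Roughly what "i = j.foo" meant to the V6 compiler:
		 * j's address, plus foo's offset, fetched as an int,
		 * with no complaint that j isn't a structure at all. */
		i = *(int *)((char *)&j + offsetof(struct s, foo));
	}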

Casey

tg@utstat.uucp (Tom Glinos) (02/14/91)

I was around BTL in the early 80's when the first transition issues were
solved. A lot of that expertise has been forgotten and will have to be
re-learned by the current set of programmers. My impression is that
the current population of programmers won't be able to do it in the time
frame you estimate.

>We do think that 64-bit desktops are necessary, not just to get
>software development to happen, but because certain kinds of end users
>will need it.

What kind of desktop application do you have in mind?

Have we as programmers given up on trying to develop code that
is small, fast, and solves specific problems?

How are you going to effectively feed data into such a huge address space,
particularly at the desktop?

jsw@xhead.esd.sgi.com (Jeff Weinstein) (02/14/91)

In article <91206@lll-winken.LLNL.GOV>, casey@gauss.llnl.gov (Casey Leedom) writes:
>   I hate to disappoint you John, but the current installed code base is
> *VERY* sloppy about short vs. int vs. long vs. char * vs. ...  When we
> ported 4.3BSD to the PDP-11 one of the biggest problems was the
> assumption that int == long in the BSD code.  I corrected literally
> thousands of such errors.

  A few years ago I ported the X server to a 16-bit machine.  It is also
full of such assumptions.

	--Jeff

-- 
Jeff Weinstein - X Protocol Police
Silicon Graphics, Inc., Entry Systems Division, Window Systems
jsw@xhead.esd.sgi.com
Any opinions expressed above are mine, not sgi's.

mash@mips.COM (John Mashey) (02/21/91)

In article <91206@lll-winken.LLNL.GOV> casey@gauss.llnl.gov (Casey Leedom) writes:
>| From: mash@mips.COM (John Mashey)
>| At one point in time, C and UNIX really KNEW that an int, and a pointer
>| were 16-bits long.  Then, C got "long", to at least do 32-bit
>| calculations, and "int" became whatever was convenient.  Inside Bell
>| Labs, before UNIX got ported to other machines, C at least was available
>| on various 32-bit architectures (like S/360).

>| As UNIX got ported to other 32-bit machines, all of us bad people who'd
>| figured int & char * were the same thing suffered, but everybody learned
>| pretty quickly not to assume this, and how to write code that would work
>| both on the 32-bitters (for future) and for 16-bitters (for installed
>| base). ...
>  I hate to disappoint you John, but the current installed code base is
>*VERY* sloppy about short vs. int vs. long vs. char * vs. ...  When we
>ported 4.3BSD to the PDP-11 one of the biggest problems was the
>assumption that int == long in the BSD code.  I corrected literally
>thousands of such errors.
>  I don't think that the next move will be any easier.  Nor do I think
>such a move will ever be easy until we throw away lint, an abomination
>that should never have existed, and install its functions into the
>compiler itself so programmers can never escape it.  This won't solve all
>of our problems, but it will go a long way.  At least many modern
>compilers are much more picky about what they'll accept.  The old V6
>compiler would let you do:
>	int i, j;
>	i = j.foo;
>Where foo is a field member of some structure.  Absolutely incredible.
>And the V6 kernel was full of such coding.

I don't think we actually disagree, if you read carefully.

1) V6 was PDP-11-based, all of the way.
2) V7 had at least addressed the 16->32-bit issue, and also a byte-order
switch.
3) As it happens, most of this stuff REALLY got addressed, in practice,
at numerous places inside Bell Labs, as people had to get code that
ran on {PDP-11, VAX, 3B20, 3B2}.  Note that portability was a general
goal of V7 (research), and was pretty well-handled.
However, it was a more detailed, grind-it-out practice
of the various development groups at BTL, who had much more pressing
concerns of forward & backward compatibility, because they HAD to
support older machines in massive numbers in the field, something
that was (rightfully) not a necessary goal for Research.
Put another way: outside the Labs, you didn't necessarily see the
code where a lot of this attention was placed.

Anyway, my claim is:
	for a while, in the late 70s, there were a fairly large number of people
	with immediate experience with making code portable across
	machines with different-sized pointers.
I also claim, and in fact, one of the reasons for talking about the 64-bit
stuff is:
	a) the prevalence of 32-bit machines has probably caused us to
	get sloppier than we were for a while.  In some sense, we may
	be partially "saved" by the existence of code for both 286 & 386 :-)
	b) Hence, some people may need a poke now, to allow time to get
	careful again.
	c) However, there are at least people in existence who do recall
	this (outside of the 286->386 domain), and many application
	developers doing new code are a lot cleaner about this stuff.
	1991 is NOT 1980, even though sloppiness does remain.
	MANY more developers are much more aware of portability issues
	than in 1980, when it was OK if it ran on a VAX.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086