[comp.arch] The CPU with 3 brains---486 compatibility with 8008

msp33327@uxa.cso.uiuc.edu (Michael S. Pereckas) (11/07/90)

In <2841@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

>In article <PCG.90Nov5195229@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

>| Except for oddities like the 860 and 960, isn't the 486 still designed
>| to be almost binary compatible with the 8008? Sigh!

>  As far as I know the 80486 and 8008 are completely compatible at the
>level of the first two digits of the part number. The instruction set is
>100% different, so the only binary compatibility is that they both will
>read ones and zeros.

I don't know anything about the 8008, but I know that although the
8086 is not binary compatible with the 8080, it is similar enough
that porting assembly code is (I mean, was) supposed to be so easy
that it could be done automatically.  The 80486 is (of course) binary
compatible with the 8086.  This is why you can run Wordstar 3.0 on
your 486 box.  (That may well be better than whatever the latest
version is.)  (A few years ago, I tried (for laughs) running Wordstar
3.0 on a 20 MHz 386.  It was no faster than on a 4MHz Z80.  Amazing.)

:-)

--
Michael Pereckas               * InterNet: m-pereckas@uiuc.edu *
just another student...          (CI$: 72311,3246)
*Jargon Dept.: Decoupled Architecture--sounds like the aftermath of a tornado*

lhughes@b11.ingr.com (Lawrence Hughes) (11/07/90)

In article <1990Nov6.223738.13265@ux1.cso.uiuc.edu>, msp33327@uxa.cso.uiuc.edu (Michael S. Pereckas) writes:
> In <2841@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
> 
> >In article <PCG.90Nov5195229@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
> 
> >| Except for oddities like the 860 and 960, isn't the 486 still designed
> >| to be almost binary compatible with the 8008? Sigh!
> 
> >  As far as I know the 80486 and 8008 are completely compatible at the
> >level of the first two digits of the part number. The instruction set is
> >100% different, so the only binary compatibility is that they both will
> >read ones and zeros.
> 
> I don't know anything about the 8008, but I know that although the
> 8086 is not binary compatible with the 8080, it is similar enough
> that porting assembly code is (I mean, was) supposed to be so easy
> that it could be done automatically.  The 80486 is (of course) binary
> compatible with the 8086.  This is why you can run Wordstar 3.0 on
> your 486 box.  (That may well be better than whatever the latest
> version is.)  (A few years ago, I tried (for laughs) running Wordstar
> 3.0 on a 20 MHz 386.  It was no faster than on a 4MHz Z80.  Amazing.)

First off, the 8008 and 8080 were compatible only in the mind of Intel's
marketing folks - different register architecture and different binary opcodes.
It was possible to do "automatic" translation of source code from 8008 to
8080, but then again, you COULD do the same for PDP-11 to VAX... (about as
"compatible"). Of course, if you did this, you wound up with programs that
ran incredibly slow, since they did not take advantage of any of the 8080's
16 bit capabilities, etc. Fortunately, there was such a vanishingly small
amount of 8008 code ever written that this wasn't much of an issue.

The Z80 WAS upward compatible with the 8080, at both architecture and binary
opcode level - curiously enough, though, the bright boys at Zilog invented
a better (read "different and incompatible") symbolic assembly langauge
which made it much more difficult to take advantage of the Z80's extended
instructions: you either stuck with Intel mnemonics (and the 8080 subset) or
worked with Zilog's "preverted" ones (e.g. almost half the actual binary 
opcodes mapped onto the single symbolic opcode "LD"). Zilog's symbolic
language was SUCH a turkey, that several extended Intel symbolic assembly 
languages were developed (and widely used) leading to a regular tower of Babel.
Clever, Zilog.

When Intel introduced the 8086, they managed to make BOTH of these mistakes!
Again, compatibility existed mainly in the minds of the marketing department
(although there was somewhat of an upward compatibility in registers, they
didn't work the same way in many cases and entire classes of instructions
such as conditional calls and returns, and indirect addressing via some
registers, curiously disappeared on the way to the 8086). There was of course
NO support for the Z80 extensions to the 8080 (don't be SILLY). One of the
worst design decisions (still plaguing us today) is that *&^%$ segmented
addressing scheme. But the crowning glory of the 8086 introduction was one
of the silliest symbolic assembly languages ever introduced (ever hear of a
STRONGLY TYPED assembler?). I figure this was concocted by some PASCAL nuts
who had fried their brains on structured programming, or some such. Again,
several folks attempted to design and support saner assembly languages, resulting
in another fine mess (was that written with the Microsoft/Intel or Digital
Research mnemonics?).
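
For anyone who never had to fight it, here is a minimal C sketch (mine, not
from the post) of the real-mode segment arithmetic being cursed above: the
20-bit physical address is segment * 16 + offset, so many different
segment:offset pairs alias the same byte and no single 16-bit pointer spans
more than 64K.  Function and variable names are just illustrative.

#include <stdio.h>

/* real-mode 8086: physical = (segment << 4) + offset */
static unsigned long phys(unsigned int seg, unsigned int off)
{
    return ((unsigned long)(seg & 0xFFFF) << 4) + (off & 0xFFFF);
}

int main(void)
{
    printf("%05lX\n", phys(0x1234, 0x0005));  /* prints 12345 */
    printf("%05lX\n", phys(0x1000, 0x2345));  /* prints 12345 -- same byte */
    return 0;
}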

I actually had to translate a major piece of 8080 code (25,000 lines) to 8086,
and TRIED to use the automatic translation tools - what a pain. Would have
been simpler to translate to PDP-11. The resulting code (without extensive
hand optimization) really did run SLOWER on the original 4.77 MHz PC than it
did on my 4 MHz Z80. I finally did get the program translated and optimized
to a large extent, but you'll never convince ME the 8086 is "upward compatible"
with the 8080.

Fortunately, the 80186 (ever see that one outside of a dedicated controller?),
the 80286 (universally recognized as the first commercially successful brain
dead processor - waddya mean, I can't get back out of protected mode without
rebooting the processor???), the 80386 (the 8086 finally done almost right -
knew they would get the hang of it if they kept trying...) and the 80486
(really getting pretty slick - too bad it costs so friggin' much) are all
durn near really and truly upward compatible (for which they have been
soundly denounced by the forces of chaos everywhere, but not by much of 
anyone who actually has to WORK with these little buggers...).

As far as WORDSTAR on 8086, I really appreciated having a well thought out
and mature word processor on what was otherwise a truly poorly supported
system. Does anyone else remember the rumors that the original translation
of Wordstar to 8086 was done by a wigged out druggie who demanded payment in
cocaine? There were several times I seriously considered whether taking a
snootful might make the 8086 architecture and bizarre symbolic language make
more sense! Wordstar 4.0 really took advantage of the 8086 and was
quite fast, thank you. I still consider my 5.x versions to be the best
combined word processor / page layout product available. Beats the hell out
of using Ventura as a post processor (whadda knee-walking turkey).

Alas, Wordstar (not the "2000" gobbler, but the REAL one) may be one of the
last commercial products written (at least primarily) in assembly language.
Welcome to the wonderful world of multi-megabyte executables that barely fit
in 2 MB systems, barely crawl on 25 MHz CPUs and won't even run on any known
diskette system. Compliments of brothers Kernigan and Ritchie. Let's not even
talk about the marvelous POINT AND GRUNT interfaces that are being shoved
down our throats now... I used to run on a MAINFRAME with 300 other folks
that had less CPU/FPU/MEMORY/DISK resources than a system that can barely run
Windows.... 

Larry (in shaky voice: "When I was a lad - we barely had 64K bytes of
memory, and were GRATEFUL for it".... cough cough) Hughes
 

vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (11/08/90)

There may be an ancient INTEL source for the continuing statements about
8085/8080/8008/4004 and 8086 binary compatibility.

In the spring of 1978, I was convinced by INTEL rep's in Ohio that the 8086
would be binary compatible with the 8085, and that the resulting software base
in things like RMX85 was one advantage of the 8086 compared to the 68000
and the Z8000.  I was somewhat confused by that conviction weeks later when
I got official chip documentation, and amused still later when I received
CONV86 (or was it CONV85?).  Fortunately, I didn't care when RMX86 finally
appeared a couple of years later.

It might have been just a communications error between the local rep and
the factory.  Then again, maybe it shows they're all the same.


Vernon Schryver,    vjs@sgi.com

peter@ficc.ferranti.com (Peter da Silva) (11/09/90)

In article <9333@b11.ingr.com> lhughes@b11.ingr.com (Lawrence Hughes) writes:
> Alas, Wordstar (not the "2000" gobbler, but the REAL one) may be one of the
> last commercial products written (at least primarily) in assembly language.

Hey, let's not blame the language for code bloat. The reason for code bloat
is *features*, *features*, *features*. Remember, the PDP-11 UNIX kernel,
written in C, was smaller than the EXE file for the latest Wordstar.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com 

vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (11/10/90)

In article <9333@b11.ingr.com>, lhughes@b11.ingr.com (Lawrence Hughes) writes:
> ...[long flame at INTEL, 8086's and ASM86]...

While the 8086 is far from wonderful in 1991, it was not bad in the late
1970's.  Some people preferred technical features of other machines, but
there were sound business reasons for many to choose the 8086.

I liked the typing of ASM86.  It made the irregularities of the instruction
set easier to stomach, allowing the assembler to make better guesses about
which of the zillion variants of opcodes you meant.  I conjecture that many
of the strong feelings about the 8086 instruction set come from those who used
the Microsoft and similar assemblers.

It was nice knowing that when I typed `MOV AX,FOO`, the assembler would
notice if I probably meant `MOV AL,FOO`.  ASM86 was no more strongly typed
than C, because it had 'casts' in the form of "type PTR FOO".  Just as in
C, if you wanted to abuse things, all you had to do was say so.  (The many
bugs in the PTR operator are irrelevant to the concept.)  No one today says
C is too strongly typed, and many are in favor of ANSI-C.
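
A rough C analogy for the point above (the analogy and the names are mine,
not Intel's documentation): ASM86 attaching a size to a symbol is like C
attaching a type to an object, and "BYTE PTR" plays the same role as a C
cast -- abuse is allowed, but you have to say so.

#include <stdio.h>

int main(void)
{
    unsigned short foo = 0x1234;   /* think: FOO DW 1234H (a word)           */
    unsigned char  al;

    /* al = foo;                      a typed assembler flags MOV AL,FOO when
                                      FOO was declared word-sized, much as a
                                      fussy C compiler warns about truncation */

    al = *(unsigned char *)&foo;   /* the escape hatch, like MOV AL,BYTE PTR FOO
                                      (which byte you get depends on endianness) */
    printf("one byte of foo: %02X\n", al);
    return 0;
}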

Among 100's of K of ASM86 code that were committed, at my hands or at my
direction in previous lives, was a translation of the 1979 vintage
Microsoft BASIC system, from its largely ASM85 source using ASM86
"codemacros." It was easy, effective, and fast to write a set of
codemacros, relying on the "typing" of ASM86 that did better than CONV86.
I never understood why INTEL wrote the strange CONV86 thing--well, I
understand, but don't sympathize.

Anyone who wanted an un-typed version of ASM86 could have easily whipped up
a set of permissive codemacros.


(sorry for the language babble in comp.arch).



Vernon Schryver,    vjs@sgi.com

seanf@sco.COM (Sean Fagan) (11/10/90)

In article <9333@b11.ingr.com> lhughes@b11.ingr.com (Lawrence Hughes) writes:
the real reason all software today is slow:

>Welcome to the wonderful world of multi-megabyte executables that barely fit
>in 2 MB systems, barely crawl on 25 MHz CPUs and won't even run on any known
>diskette system. Compliments of brothers Kernigan and Ritchie.
(It's Kernighan, btw.)

So, tell me, Larry, why are programs written in C faster than ones written
in assembly?

-- 
-----------------+
Sean Eric Fagan  | "*Never* knock on Death's door:  ring the bell and 
seanf@sco.COM    |   run away!  Death hates that!"
uunet!sco!seanf  |     -- Dr. Mike Stratford (Matt Frewer, "Doctor, Doctor")
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

pcg@cs.aber.ac.uk (Piercarlo Grandi) (11/10/90)

On 7 Nov 90 14:01:37 GMT, lhughes@b11.ingr.com (Lawrence Hughes) said:

lhughes> Welcome to the wonderful world of multi-megabyte executables
lhughes> that barely fit in 2 MB systems, barely crawl on 25 MHz CPUs
lhughes> and won't even run on any known diskette system. Compliments of
lhughes> brothers Kernigan and Ritchie.

Poor Kernighan and Ritchie must be turning in their graves -- oops
sorry, they are fortunately still with us :-). V7 Unix was designed to
run efficiently on a 64KB address space machine. C used to generate
programs within 10% of the space and speed of comparable assembler. I
used to run a PDP-11/34 with 2.9BSD and five users doing development in
248KB, of which about 96KB were taken by the kernel (including the
buffer cache etc...).

The problem we have now is that we are running the same thing twenty
years later when a laptop has got vastly more power and a more
sophisticated architecture than a PDP-11/34, and that the people doing
development are simply not of the caliber, or at least of the good
taste, of Kernighan and Ritchie, in language, compiler, and OS
architecture.

lhughes> Larry (in shaky voice: "When I was a lad - we barely had 64K bytes of
lhughes> memory, and were GRATEFUL for it".... cough cough) Hughes

I also dote (as above) on when we had to make do with little. Naturally
one is happy that now we have by comparison plenty -- the disappointment
is in how badly this plenty is being exploited -- it is being used not
to build more useful software, but more poorly written software.

Unfortunately it is more difficult to find a programmer than a megabyte
-- or, let me repeat myself, we have more good Japanese process
engineers than good Western programmers.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

lacey@cpsin3.cps.msu.edu (Mark M Lacey) (11/12/90)

In article <8658@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>
>In article <9333@b11.ingr.com> lhughes@b11.ingr.com (Lawrence Hughes) writes:
>the real reason all software today is slow:
>
>>Welcome to the wonderful world of multi-megabyte executables that barely fit
>>in 2 MB systems, barely crawl on 25 MHz CPUs and won't even run on any known
>>diskette system. Compliments of brothers Kernigan and Ritchie.
>(It's Kernighan, btw.)

In what way are they responsible for this?  Are you saying this has
something to do with the C language? (I can't find the original article
anywhere, so it is hard to tell).
>
>So, tell me, Larry, why are programs written in C faster than ones written
>in assembly?

Why do you people find that you must overgeneralize everything to make
a point?

Do you have proof that ALL programs written in C are faster than ones
written in assembly?
--
Mark M. Lacey
(lacey@cpsin.cps.msu.edu)

sysmgr@KING.ENG.UMD.EDU (Doug Mohney) (11/12/90)

In article <8658@scolex.sco.COM>, seanf@sco.COM (Sean Fagan) writes:
>
>So, tell me, Larry, why are programs written in C faster than ones written
>in assembly?

Why is it faster to write programs in C than assembly? :-)

greg@tcnz2.tcnz.co.nz (Greg Calkin) (11/12/90)

On 7 Nov 90 14:01:37 GMT, lhughes@b11.ingr.com (Lawrence Hughes) said:
lhughes> Welcome to the wonderful world of multi-megabyte executables
lhughes> that barely fit in 2 MB systems, barely crawl on 25 MHz CPUs
lhughes> and won't even run on any known diskette system. Compliments of
lhughes> brothers Kernigan and Ritchie.

Is this a cheap shot at C or Unix ? Kernighan and Ritchie were both
involved in Plan 9, which is a stunningly simple, very small operating
system. They can hardly be blamed for the bloat in Unix. Some has
come from marketing, some from features added, including bits from 
BSD, etc. So many people have had their hands in the pie that it 
long since passed from K&R's sphere of control. They have criticised
the bloat too.

In article pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
pcg>Poor Kernighan and Ritchie must be turning in their graves -- oops
pcg>sorry, they are fortunately still with us :-). V7 Unix was designed to
pcg>run efficiently on a 64KB address space machine. C used to generate
pcg>programs within 10% of the space and speed of comparable assembler. I
pcg>used to run a PDP-11/34 with 2.9BSD and five users doing development in
pcg>248KB, of which about 96KB were taken by the kernel (including the
pcg>buffer cache etc...).

Are there any K&R C compilers left ?
-- 
Greg Calkin, Systems Engineer and Dreamer                    (greg@tcnz.co.nz)
Thomas Cook N.Z. Limited, PO Box 24, Auckland CPO, New Zealand, Ph (09)-793920
Disclaimer : Would you buy a used car from someone with these opinions ?

darcy@druid.uucp (D'Arcy J.M. Cain) (11/12/90)

In article <8658@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
> [responding to claims that assembler is faster than C]
>So, tell me, Larry, why are programs written in C faster than ones written
>in assembly?
>
It is understandable why some people might think that assembler is necessarily
faster than C but this discussion reminds me of the idiot who once told me
that the new version of NewWord (a Wordstar clone) was faster than the old
one because the old one was written in C and the new one in compiled BASIC.
I assumed he meant the other way around but no amount of cross-examining
could get him to reverse the statement.  Later this same person told me that
there was something wrong with my implementation of XModem because it didn't
work with his BASIC implementation.  Somehow I couldn't get excited about it.

BTW, I'm fairly sure NewWord was never written in BASIC in the first place,
let alone a later version, but I may be wrong.

-- 
D'Arcy J.M. Cain (darcy@druid)     |
D'Arcy Cain Consulting             |   I support gun control.
West Hill, Ontario, Canada         |   Let's start with the government!
+ 416 281 6094                     |

aglew@crhc.uiuc.edu (Andy Glew) (11/13/90)

>Why is it faster to write programs in C than assembly? :-)

I'll regret getting into this, but empirical studies of programmer
productivity seem to show that programmers produce approximately the
same number of lines of code[*] per day, irrespective of the language
they program in. One C statement can be several assembly statements.

The actual coefficients you use in your estimates of time required vary
a bit, but not too much.  I used COCOMO the last time I was doing
estimates. Is anybody still using COCOMO, or is it completely passe'?
Can anyone provide us with some empirical COCOMO parameters for
assembly versus C at some industrial site?

[*] debugged code, including comments, in languages the programmers
are familiar with.
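
For concreteness, a rough sketch of the Basic COCOMO organic-mode formulas
mentioned above (the textbook coefficients 2.4/1.05 and 2.5/0.38).  The
25 KLOC project size and the 4x assembly expansion factor are assumed
example figures, not measured parameters: the point is only that if people
produce about the same debugged lines per day in any language, the estimate
is driven by line count.

#include <stdio.h>
#include <math.h>   /* link with -lm */

static double effort_pm(double kloc)  { return 2.4 * pow(kloc, 1.05); }
static double schedule_mo(double e)   { return 2.5 * pow(e, 0.38); }

int main(void)
{
    double kloc_c   = 25.0;            /* hypothetical project size in C   */
    double kloc_asm = kloc_c * 4.0;    /* assumed expansion into assembly  */

    printf("C:   %5.1f KLOC -> %6.1f person-months, %5.1f months\n",
           kloc_c,   effort_pm(kloc_c),   schedule_mo(effort_pm(kloc_c)));
    printf("asm: %5.1f KLOC -> %6.1f person-months, %5.1f months\n",
           kloc_asm, effort_pm(kloc_asm), schedule_mo(effort_pm(kloc_asm)));
    return 0;
}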
--
Andy Glew, a-glew@uiuc.edu [get ph nameserver from uxc.cso.uiuc.edu:net/qi]

gil@banyan.UUCP (Gil Pilz@Eng@Banyan) (11/13/90)

In article <PCG.90Nov10145452@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>The problem we have now is that we are running the same thing twenty
>years later when a laptop has got vastly more power and a more
>sophisticated architecture than a PDP-11/34, and that the people doing
>development are simply not of the caliber, or at least of the good
>taste, of Kernighan and Ritchie, in language, compiler, and OS
>architecture.

Well these guys also didn't have a horde of marketing weenies telling
them the machine must:
    o) dice
    o) slice
    o) moosh
    o) squoosh
or else they wouldn't be able to sell it ! Most of the UNIX bloat can
be attributed to the "creeping featurism" that results when purchasing
decisions are based on oversimplified lists of system features.

It's not _that_ hard to come up with a NEW system that is simple and
elegant when you're tucked away in some lab with no one watching you
(K&R deserve kudos for designing one that was also portable and widely
useful).  It's quite another thing to be working on an existing
product and tell your boss or whatever "Look, this system doesn't
handle concept A very well. In order to work with concept A I'm going
to have to redesign major portions of the system. Not to worry though,
the end result will be much simpler, much smaller and much more robust
than if I just tacked it on to the side. And it'll only take five
times as long initially !"  I don't know about y'all, but everyone
*I've* ever worked for would say "Tack it on to the side !" because
they know that those _buying_ the system _don't_ _care_ whether the
underlying implementation is elegant or not. They just want to see
"feature A" as a bullet item in some marketing spiel.

"Good taste" carries absolutely no weight in this industry. All you
can hope for is to sell your company on the fact that a well designed,
integrated solution will end up costing them far less in support later
down the road. Unfortunately the current software industry has the
unfathomable habit of chronically underestimating or simply
discounting support costs. Even if they do appreciate support hassles
they know that those problems will probably be a year in surfacing,
and if they don't ship this product NOW with "feature A" in it they
won't even be AROUND next year, so "feature A" get's glomed onto the
side.

Any simple, elegant system will quickly become a large, sticky mess of
poorly designed, redundant features when it is brought into widespread
industrial use. It's the nature of the beast. If you want simpler,
more elegant systems you'll need a market that makes its purchasing
decisions based on simplicity and elegance and NOT on gross features.

Gilbert Pilz Jr.  "I don't believe in nihilism, anarchy is too confining
gil@banyan.com     for me, I have no opinion about apathy." - g. panfile

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (11/14/90)

In article <PCG.90Nov10145452@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>I
>used to run a PDP-11/34 with 2.9BSD and five users doing development in
>248KB, of which about 96KB were taken by the kernel (including the
>buffer cache etc...).

But who were you developing for? Users who knew as much about the
computer as you did, or "unsophisticated" users? (I.e., users with
something better to do than bother themselves with the details of 
*your* job.)

If you target unsophisticated users, you have to write software that
spends most of its time:

1. explaining to the user what to do; and
2. analyzing the latest user error.

If the software has a certain level of capability, then it must also
have some minimum level of complexity. The only way to reduce the
computer's share of this complexity is to shift it somewhere else,
namely, onto the user's expertise, onto the telephone support
personnel, onto the printed manual (which explains what the program
is unable to explain for itself), onto the local guru, etc.

What good is an OS or application that fits in 6KB, if it requires a 
super-expert to run it, if a user spends more time on the telephone and
with the manual than getting results from the computer, and if it gives 
such illuminating error messages as "ERR xmbr843"? The code is not 
actually "small", then. It is just being subsidized by other essential 
systems. And those other systems are not getting cheaper every year.

Most useful software today is frighteningly hard to install, learn,
maintain, use, and REMEMBER. Most computer users therefore spend most 
of their time being confused, asking for help, looking things up,
or trying to remember how they solved problem XYZ six months ago
(which has now recurred). Not many computer systems or major 
applications are usable with less than a user's full-time attention. 
That's great if you can find someone who will pay you to do just one
thing for the rest of your life. But once jobs get that specialized,
they will be automated in the next six months. People only stay ahead
of machines by being more flexible, and that means constantly learning
and trying new things.

>The problem we have now is that we are running the same thing twenty
>years later when a laptop has got vastly more power and a more
>sophisticated architecture than a PDP-11/34, and that the people doing
>development are simply not of the caliber, or at least of the good
>taste, of Kernighan and Ritchie, in language, compiler, and OS
>architecture.

The PDP-11/34 was a very costly machine in its day, relative to the
average user's salary. Therefore it was good business to have the 
users log off for hours at a time and go look up the meaning of 
messages like "ERR xmbr843". The computer was so expensive that you
wanted to prevent anyone from using it until *after* they had done
all their thinking about the problem. 

Today, the laptop computer is so cheap that the user is losing money 
every time (s)he has to stop working and go hunt around for a printed 
manual. The laptop user also uses the computer for a vastly larger 
number of tasks than the PDP-11/34 could do. Since users have not 
gotten smarter in the last 20 years, they require their computers to
be smarter.


--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (11/14/90)

In article <1106@banyan.UUCP> gil@banyan.com writes:
>Any simple, elegant system will quickly become a large, sticky mess of
>poorly designed, redundant features when it is brought into widespread
>industrial use. It's the nature of the beast. If you want simpler,
>more elegant systems you'll need a market that makes its purchasing
>decisions based on simplicty and elegance and NOT on gross features.

If the people who make up that market had "simple and elegant" problems
to solve, no doubt they would be pleased with software that was
similarly "simple and elegant". The market consists of people who are
in business to make a profit. They base their purchasing decisions
on what their experience shows to yield the largest profit.
If they are making the wrong decision, then someone should be able
to go into business in competition with them, and prevail. 

Simplicity and elegance occur primarily in textbooks. The real world
has enormously complex problems to solve, under severe time constraints.
The market can't afford to ignore reality just to satisfy the aesthetic
sense of a cloistered, lazy programmer ("lazy" is not a pejorative term;
nobody wants to work harder than necessary, else we wouldn't demand
salaries). To sell into a real market, one must roll up one's sleeves 
and do a lot of real, detailed, and tedious work.

Experience has shown repeatedly that "simple" and "elegant" quickly 
become synonymous with "incapable". Look at the C language. It was 
designed initially to be small, but it is uncompetitive for most real 
problem-solving until you add hundreds of functions in libraries. C
can't stay "small" in practice and survive, else it would have. C
survives because it provides a set of base concepts to begin with,
an effective mechanism for expansion, and the ability for the customer
to select the expansion (s)he wants. In practice, the farther
this expansion goes, the more potentially valuable the system becomes.
The limiting factor is the programmer's ability to manage complexity.
 

--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

abm88@ecs.soton.ac.uk (Morley A.B.) (11/14/90)

In <6704@uceng.UC.EDU> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:

>What good is an OS or application that fits in 6KB, if it requires a
>super-expert to run it, if a user spends more time on the telephone and
>with the manual than getting results from the computer, and if it gives
>such illuminating error messages as "ERR xmbr843"?

Quite right, but what really winds me up is an OS or application that
requires a super-expert ... manuals ...  cryptic error messages ... etc
yet still fills up a 100Meg hard disc and struggles to run in under 4MB.
Sadly the computer I'm using now suffers from this. I'm not keen on UNIX
by the way. Contrast this with Sidekick for the PC which is small and
reasonably easy to use.

Andrew Morley, abm88@uk.ac.soton.ecs (JANET)

gil@banyan.UUCP (Gil Pilz@Eng@Banyan) (11/15/90)

In article <6706@uceng.UC.EDU> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>If the people who make up that market had "simple and elegant" problems
>to solve, no doubt they would be pleased with software that was
>similarly "simple and elegant". The market consists of people who are
>in business to make a profit. They base their purchasing decisions
>on what their experience shows to yield the largest profit.
>If they are making the wrong decision, then someone should be able
>to go into business in competition with them, and prevail. 

Well no one operates in a vacuum. If I wanted to try and beat the
competition in "industry X" through the use of cheaper, smaller
systems that needed less maintenance, were easier to use and
understand (perhaps after some non-minimal startup education), and
continued to maintain their usefulness even when the nature of
"industry X" changed I would first need to _find_ such systems. I
would need to find employees who were willing to, in some limited
sense, "become their own programmers".

>Simplicity and elegance occur primarily in textbooks. The real world
>has enormously complex problems to solve, under severe time constraints.
>The market can't afford to ignore reality just to satisfy the aesthetic
>sense of a cloistered, lazy programmer ("lazy" is not a pejorative term;
>nobody wants to work harder than necessary, else we wouldn't demand
>salaries). To sell into a real market, one must roll up one's sleeves 
>and do a lot of real, detailed, and tedious work.

It's not just "aesthetic sense" it's "good sense". It's been shown
over and over again that the best overall approach to tackling a wide
range of complex problems is to provide a minimal set of powerful,
simple primitives and an easy way to link these primitives together
into usable wholes. This is the real lesson of UNIX that seems to
have gotten lost in all the "yes, but does it support the FOO-remote-
file-munger ?" questions.  Users who _understand_ the paradigms of the system
they are working with (NOT the low-level stuff mind you) and can play
with these paradigms will probably come up with much better solutions
to their _particular_ problems than Joe-the-isolated-programmer will
_ever_ be able to build into some monolithic, pop-down-windows, here's
the 3000 page user manual, application. But they won't be able to do
this as long as the systems we keep presenting them with have their
underlying paradigms obscured with 7 million conflicting "features".
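
As a small reminder of what "primitives you can link together" means in
practice, here is a minimal C sketch of gluing two stock programs into
"ls | wc -l" using the kernel's own primitives (pipe/fork/exec).  Nothing
here is specific to any of the systems being argued about; it is just the
composition idea in its plainest form.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                        /* child: ls writes into the pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls"); _exit(1);
    }

    /* parent: wc reads the pipe as its stdin */
    dup2(fd[0], STDIN_FILENO);
    close(fd[0]); close(fd[1]);
    execlp("wc", "wc", "-l", (char *)NULL);
    perror("execlp wc");
    return 1;
}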

>Experience has shown repeatedly that "simple" and "elegant" quickly
>become synonymous with "incapable". Look at the C language. It was
>designed initially to be small, but it is uncompetitive for most real
>problem-solving until you add hundreds of functions in libraries. C
>can't stay "small" in practice and survive, else it would have. C
>survives because it provides a set of base concepts to begin with,
>an effective mechanism for expansion, and the ability for the customer 
>to select the expansion (s)he wants. In practice, the farther 
>this expansion goes, the more potentially valuable the system becomes.

I think this argument supports my position. I'm not necessarily
talking about "small" in the memory usage sense, but rather "small" in
the cognitive space sense.  The idea is not to try and force users to
work at a lower level for the sake of simplicity. The idea is to
provide them with a simplicity they can work with so they can spend
their time thinking about their real problems and not why "feature X"
works so much differently than "feature Y".

>The limiting factor is the programmer's ability to manage complexity.

It's our job to keep the complexity _down_ while increasing the power
of the underlying primitives.

Gilbert Pilz Jr.  "I don't believe in nihilism, anarchy is too confining
gil@banyan.com     for me, I have no opinion about apathy." - g. panfile

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (11/15/90)

In article <6706@uceng.UC.EDU> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:

| Simplicity and elegance occur primarily in textbooks. 

  And in early version of the UNIX(tm) operating system.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
    VMS is a text-only adventure game. If you win you can use unix.

em@dce.ie (Eamonn McManus) (11/16/90)

lhughes@b11.ingr.com (Lawrence Hughes) writes:
>The Z80 WAS upward compatible with the 8080, at both architecture and binary
>opcode level...

This isn't *quite* true.  The parity flag on the 8080 became the
parity/overflow flag on the Z80.  This meant that arithmetic instructions
which affect this flag as parity on the 8080 affect it as overflow on the
Z80.

I would not be surprised to discover that there are no programs in existence
that are affected by this change (unless they use it deliberately to see
which processor they are running on).  About the only arithmetic
instruction where you would be interested in the resulting parity is (in
Z80-speak) `adc a,a' which rotates left the accumulator and carry.
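
For the record, a small C sketch (mine, not from the post) of what the flag
change does to an 8-bit ADD: the same flag bit holds even parity of the
result on the 8080 but signed overflow on the Z80, so a conditional jump on
that flag can go different ways on the two chips.  Helper names are just
illustrative.

#include <stdio.h>

static int parity_even(unsigned char v)                    /* 8080: P flag */
{
    int i, bits = 0;
    for (i = 0; i < 8; i++)
        bits += (v >> i) & 1;
    return (bits & 1) == 0;
}

static int add_overflow(unsigned char a, unsigned char b)  /* Z80: V flag  */
{
    unsigned char r = (unsigned char)(a + b);
    /* signed overflow: operands share a sign, result's sign differs */
    return ((a ^ r) & (b ^ r) & 0x80) != 0;
}

int main(void)
{
    unsigned char a = 0x7F, b = 0x01;
    unsigned char r = (unsigned char)(a + b);               /* 0x80 */
    printf("ADD %02X,%02X = %02X: 8080 P=%d, Z80 V=%d\n",
           a, b, r, parity_even(r), add_overflow(a, b));
    return 0;
}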

--
Eamonn McManus <em@dce.ie>

jmaynard@thesis1.hsch.utexas.edu (Jay Maynard) (11/16/90)

In article <vessel@dce.ie> em@dce.ie (Eamonn McManus) writes:
>lhughes@b11.ingr.com (Lawrence Hughes) writes:
>I would not be surprised to discover that there are no programs in existence
>that are affected by this change (unless they use it deliberately to see
>which processor they are running on).  About the only arithmetic
>instruction where you would be interested in the resulting parity is (in
>Z80-speak) `adc a,a' which rotates left the accumulator and carry.

There was one case. The first couple of versions of Altair BASIC, developed
for the MITS Altair 8800 by Bill Gates and later to become Microsoft BASIC,
were affected by this change, and had to be patched to run on a Z-80.
Interestingly enough, Gates could not officially release the patches, since
his contract with MITS specified that the interpreter was only to be sold
for Altair systems, and no Altair system ran a Z-80 (at least as far as MITS
was concerned). I don't know the instruction sequence they used that tickled
the problem.
-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL | Never ascribe to malice that which can
jmaynard@thesis1.hsch.utexas.edu  | adequately be explained by stupidity.
         "With design like this, who needs bugs?" - Boyd Roberts

rhealey@digibd.com (Rob Healey) (11/17/90)

In article <8658@scolex.sco.COM> seanf@sco.COM (Sean Fagan) writes:
>
>In article <9333@b11.ingr.com> lhughes@b11.ingr.com (Lawrence Hughes) writes:
>the real reason all software today is slow:
>>Welcome to the wonderful world of multi-megabyte executables that barely fit
>>in 2 MB systems, barely crawl on 25 MHz CPUs and won't even run on any known
>>diskette system. Compliments of brothers Kernigan and Ritchie.
>(It's Kernighan, btw.)
>
>So, tell me, Larry, why are programs written in C faster than ones written
>in assembly?
>

	Because few people know how to write "good" assembly code anymore...
	Why bother when you have a 2 Meg compiler to think for you?

		-Rob

#include <std/disclaimers.h>

Alkire@apple.com (Bob Alkire) (11/17/90)

>lhughes@b11.ingr.com (Lawrence Hughes) writes:
>>The Z80 WAS upward compatible with the 8080, at both architecture and binary
>>opcode level...
> em@dce.ie (Eamonn McManus) replies:
>This isn't *quite* true.  The parity flag on the 8080 became the
>parity/overflow flag on the Z80.  This meant that arithmetic instructions
>which affect this flag as parity on the 8080 affect it as overflow on the
>Z80.

>I would not be surprised to discover that there are no programs in existence
>that are affected by this change ...

Most CP/M-based programs that were written for the 8080 did run without 
difficulty on the Z80 with one well known exception: Microsoft Basic. 
There was a three instruction patch to fix it that I believe was written 
up in a very early issue of Dr. Dobbs. Bill Gates was using a jump on 
parity after a arithmetic instruction for some yet to be understood 
reason. 

Bob
_k
Monday    $ 30,510,000   $ 41,135,000
Tuesday   $ 39,233,000   $ 37,242,000
Wednesday $ 43,708,000   $ 37,715,000
Thursday  $ 37,056,000   $ 44,668,000
Friday    $ 39,941,000   $ 34,074,000
Total     $190,448,000   $194,834,000
        

herrickd@iccgcc.decnet.ab.com (11/19/90)

In article <11297@goofy.Apple.COM>, Alkire@apple.com (Bob Alkire) writes:
> Bob
> _k
> Monday    $ 30,510,000   $ 41,135,000
> Tuesday   $ 39,233,000   $ 37,242,000
> Wednesday $ 43,708,000   $ 37,715,000
> Thursday  $ 37,056,000   $ 44,668,000
> Friday    $ 39,941,000   $ 34,074,000
> Total     $190,448,000   $194,834,000
>         
This must have been an interesting week, Bob.

dan herrick

dtynan@unix386.Convergent.COM (Dermot Tynan) (11/29/90)

In article <794@ecs.soton.ac.uk>, abm88@ecs.soton.ac.uk (Morley A.B.) writes:
: 
: I'm not keen on UNIX
: by the way. Contrast this with Sidekick for the PC which is small and
: reasonably easy to use.

Ah yes, but can it fork?
		- Der
-- 
Dermot Tynan	dtynan@zorba.Tynan.COM
		{altos,apple,mips,pyramid}!zorba!dtynan

	"Five to one, baby, one in five.  No-one here gets out alive."