[comp.sys.ibm.pc] Why unix doesn't catch on

bright@Data-IO.COM (Walter Bright) (04/04/89)

In article <29177@bu-cs.BU.EDU> madd@buit4.bu.edu (Jim Frost) writes:
<The biggest problem with UNIX on the PC is there is no single version.
<Different versions came from different sources and perform (pardon my
<repetitiveness) differently.  I've used XENIX, plain System V, System
<V with Berkeley enhancements, and SunOS (Berkeley with enhancements,
<basically).  The latter performs the best of all of them (easily
<outperforming OS/2), but most of them outperformed OS/2 once any real
<load developed.

Unix suffers from two killer problems:
1. Lack of media compatibility.
	Silly unix vendors invariably invent a new file format. I haven't
	seen one yet where you could write a 5.25" disk under one
	vendor's unix and read it on another's.
2. Lack of binary compatibility.
	Can't compile on one unix and run on another, even if the
	hardware is the same. Source code compatibility simply isn't
	good enough.
The irritating thing is that these problems are easily solvable, but
unix vendors suffer badly from NIH.

At least the Mac can now read MSDOS disks, and funny thing, it's selling
much better! Maybe someday the unix people will wake up and discover
that compatibility sells more than capability does.

An MS-DOS application developer can write and test a piece of code, and
can run it on 25 million DOS machines. Unix application developers have
to buy a dozen different machines, recompile and test on each, and stock
inventory on a dozen different permutations. This isn't practical for the
small developer, which is where most applications come from.

davidsen@steinmetz.ge.com (Wm. E. Davidsen Jr) (04/04/89)

In article <1922@dataio.Data-IO.COM> bright@dataio.Data-IO.COM (Walter Bright) writes:

| Unix suffers from two killer problems:
| 1. Lack of media compatibility.
| 	Silly unix vendors invariably invent a new file format. I haven't
| 	seen one yet where you could write a 5.25" disk under one
| 	vendor's unix and read it on another's.

  5-1/4" 360k and 1200k cpio (and tar) interchanges between xenix/286
(V7+SysIII), xenix/386 (SysV), Interactive, MicroPort, BellTech (all V.2
or V.3), PC/ix (SysIII), PC/RT (V.1), and probably a few I've forgotten.
QIC-24 DC600 tapes run on Xenix, MicroPort, PC/RT, Sun and Apollo at
least.
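
The interchange Davidsen describes works because tar and cpio define the
archive format themselves, independent of any vendor's filesystem. A minimal
modern sketch of the idea follows; an ordinary file stands in for the raw
floppy device (device names like /dev/rfd0 varied by vendor and are
assumptions here):

```shell
# "Write the disk" on vendor A's unix: tar the data onto the raw medium.
# A plain file stands in for the floppy device in this sketch.
mkdir -p srcdir && echo "portable data" > srcdir/readme
tar cf floppy.img -C srcdir readme

# "Read the disk" on vendor B's unix: the archive format, not the
# filesystem, is what travels between systems.
mkdir -p destdir && tar xf floppy.img -C destdir
cat destdir/readme
```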

  MS-DOS floppies use tracks (35,40,77,80), sides(1,2), and
sectors(8,9,15,18) in interesting combinations, as well as a few vendors
who run sector sizes of 256, 512, and 1024 bytes. Portability beyond most
systems being able to read 360k 5-1/4" disks is dubious; ask any software vendor.

| 2. Lack of binary compatibility.
| 	Can't compile on one unix and run on another, even if the
| 	hardware is the same. Source code compatibility simply isn't
| 	good enough.

  As far as I know all 386 versions of UNIX will currently cross
execute. Other platforms are not as well connected, perhaps.

| The irritating thing is that these problems are easily solvable, but
| unix vendors suffer badly from NIH.

  I will let some software vendors tell you how "easily solved" the
problems are.

You left out one killer problem: people who don't like UNIX spreading
old information, partial information, and outright B.S.
-- 
	bill davidsen		(wedu@crd.GE.COM)
  {uunet | philabs}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

karl@ddsw1.MCS.COM ([Karl Denninger]) (04/05/89)

>Response 12 of 12 (2282) by bright at Data on Tue 4 Apr 89 2:24
>[Walter Bright]
>Unix suffers from two killer problems:
>1. Lack of media compatibility.

   Well, perhaps.  This is lessening FAST.  This also depends on what you 
mean by compatibility as well.  I can't mount a disk file system from 
another machine some of the time, but I can _always_ write a disk volume on 
floppy with tar, cpio, or afio that can be read across the machines.

>2. Lack of binary compatibility.
>	Can't compile on one unix and run on another, even if the
>	hardware is the same. Source code compatibility simply isn't
>	good enough.

NONSENSE.  With the current releases I can compile on Xenix or UNIX (AT&T-
derived), and run it on the same platform under the different system 
versions unchanged.  This encompasses SCO Xenix, Interactive 386/Ix, 
Microport, AT&T, Bell Technologies, and ENIX (Everex).  That's all of 'em 
for the '386 system environment.  Across processors is another matter 
entirely, but that's not reasonable to expect -- after all, since when can 
you run a 68020 binary on anything other than a 68xxx series processor?  
Under ANY operating system?

Xenix will also run all '286, '186, and 8086 binaries compiled on previous 
versions of SCO (or other) Xenix's.  

No binary compatibility?  Where have you been?

----
Karl Denninger (karl@ddsw1.MCS.COM, <well-connected>!ddsw1!karl)
Public Access Data Line: [+1 312 566-8911], Voice: [+1 312 566-8910]

palowoda@megatest.UUCP (Bob Palowoda) (04/05/89)

From article <1922@dataio.Data-IO.COM>, by bright@Data-IO.COM (Walter Bright):
> In article <29177@bu-cs.BU.EDU> madd@buit4.bu.edu (Jim Frost) writes:
> <The biggest problem with UNIX on the PC is there is no single version.
> <Different versions came from different sources and perform (pardon my
> <repetitiveness) differently.  I've used XENIX, plain System V, System
> <V with Berkeley enhancements, and SunOS (Berkeley with enhancements,
> <basically).  The latter performs the best of all of them (easily
> <outperforming OS/2), but most of them outperformed OS/2 once any real
> <load developed.
> 
> Unix suffers from two killer problems:
> 1. Lack of media compatibility.
> 	Silly unix vendors invariably invent a new file format. I haven't
> 	seen one yet where you could write a 5.25" disk under one
> 	vendor's unix and read it on another's.

     Gee, I take my SCO Xenix disks and read them on my friend's AT&T
     UNIX PC clone machine with no file format changes. This also
     applies to ENIX, Interactive Systems UNIX, etc. And if I want
     to move it to Sun or BSD systems I just copy it from a PC
     Unix machine over the net. This problem doesn't kill me.


> 2. Lack of binary compatibility.
> 	Can't compile on one unix and run on another, even if the
> 	hardware is the same. Source code compatibility simply isn't
> 	good enough.
> The irritating thing is that these problems are easily solvable, but
> unix vendors suffer badly from NIH.

     Gee, I compile my Xenix application and it runs the binary on
     my friend's AT&T clone running PC Unix 3.2, and also on another
     friend's Enix machine. The next Xenix release will allow my friend
     to run his binaries on my machine.
> 
> At least the Mac can now read MSDOS disks, and funny thing, it's selling
> much better! Maybe someday the unix people will wake up and discover
> that compatibility sells more than capability does.
> 
     Let's see you run a DOS program on the Mac's processor. I can load
     and run DOS programs on my UNIX system with the same processor.

> An MS-DOS application developer can write and test a piece of code, and

  ^^
  At least you have the correct term here. Most application software
  requires more than one programmer to work on the code. Unix has
  the ideal environment to manage the project much more efficiently.


> can run it on 25 million DOS machines. Unix application developers have
> to buy a dozen different machines, recompile and test on each, and stock
> inventory on a dozen different permutations. This isn't practical for the
> small developer, which is where most applications come from.



  Yes, you are right that there are some brands of UNIX where you would
  have to "buy" the hardware to get the software running. But you can't
  run OS/2 on an XT. You may be right that most applications come from
  small developers, but why did MS price the OS/2 development kit at
  $3,000?
  I think there will be some good applications software for OS/2, but
  it appears that it will take just as much time, money, and effort to
  do it on OS/2 as it does on UNIX. Just about this time MS or IBM
  will offer OS/3 with multiuser. Just kidding. I think customers
  buy what they need to get the job done the way they want it.
  I also think they're going to want to see many different applications
  run on either OS/2 or UNIX before they put down their cash this time.
  So it looks like a good race. Who is in the lead?


  ---Bob



-- 
 Bob Palowoda                               
 Work: {sun,decwrl,pyramid}!megatest!palowoda                           
 Home: {sun}ys2!fiver!palowoda                
 BBS:  (415)796-3686 2400/1200   Voice:(415)745-7749                  

bill@bilver.UUCP (bill vermillion) (04/05/89)

In article <1922@dataio.Data-IO.COM> bright@dataio.Data-IO.COM (Walter Bright) writes:
}In article <29177@bu-cs.BU.EDU> madd@buit4.bu.edu (Jim Frost) writes:
}<The biggest problem with UNIX on the PC is there is no single version.
.....  
}Unix suffers from two killer problems:
}1. Lack of media compatibility.
}	Silly unix vendors invariably invent a new file format. I haven't
}	seen one yet where you could write a 5.25" disk under one
}	vendor's unix and read it on another's.
}2. Lack of binary compatibility.
}	Can't compile on one unix and run on another, even if the
}	hardware is the same. Source code compatibility simply isn't
}	good enough.

Current Unix version for the 386 from ATT & Interactive can read Xenix disks.
SCO will shortly be shipping their version that can read and write both ways.

Basically the 386 ports are compatible among manufacturers.  You can take
Xenix applications and run them on Unix on some versions NOW.
-- 
Bill Vermillion - UUCP: {uiucuxc,hoptoad,petsd}!peora!rtmvax!bilver!bill
                      : bill@bilver.UUCP

john@jwt.UUCP (John Temples) (04/06/89)

In article <1922@dataio.Data-IO.COM> bright@dataio.Data-IO.COM (Walter Bright) writes:
>Unix suffers from two killer problems:
>1. Lack of media compatibility.
>	Silly unix vendors invariably invent a new file format. I haven't
>	seen one yet where you could write a 5.25" disk under one
>	vendor's unix and read it on another's.

Hmmm.  I use Microport System V/386, Interactive 386/ix, and Venix/286
regularly.  I have no media compatibility problems between them.  In fact,
I sometimes format diskettes on 286 Unix for use on 386 Unix because the 286
has a faster format program.

>2. Lack of binary compatibility.
>	Can't compile on one unix and run on another, even if the
>	hardware is the same. 

Hmmm.  On the above mentioned Unix versions, the two 386 versions have full
binary compatibility.  And executables generated on the 286 version run just
fine on the 386 versions.  Obviously, you can't have binary compatibility
across all different hardware.  But with SVR3.2, the issue of Xenix binary
compatibility is resolved -- all SVR3.2 Unix versions will be binary
compatible on the 386.

>	Source code compatibility simply isn't good enough.

Why not?  What's the big deal about compiling the program being a part of the
installation procedure? 
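
A rough sketch of what Temples's compile-on-install step might look like;
the tool name `mytool` is invented for illustration, and `cc` stands for
whatever compiler the target unix provides:

```shell
# hypothetical install step: ship source, build on the customer's
# machine, then clean up the intermediates
cat > mytool.c <<'EOF'
#include <stdio.h>
int main(void) { puts("mytool installed"); return 0; }
EOF
cc -o mytool mytool.c    # whatever compiler the target unix provides
rm -f mytool.c           # source and object files need not stay around
./mytool
```
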
-- 
John Temples - UUCP: {uiucuxc,hoptoad,petsd}!peora!rtmvax!bilver!jwt!john

mark@motcsd.UUCP (Mark Jeghers) (04/07/89)

In article <[2282.13]karl@ddsw1.comp.ibmpc;1> karl@ddsw1.MCS.COM ([Karl Denninger]) writes:
>>Response 12 of 12 (2282) by bright at Data on Tue 4 Apr 89 2:24
>>[Walter Bright]
>>Unix suffers from two killer problems:
>>1. Lack of media compatibility.
>
>   Well, perhaps.  This is lessening FAST.  This also depends on what you 
>mean by compatibility as well.

"Lack of media compatibility" in this case largely boils down to different
floppy formats.  It should be noted, however, that there is quite a LOT of
media compatibility in the area of tape drives.  9-track open-reel mag tapes
have an enormous compatibility base, and emerging standards for cartridge
tapes are getting there too, although perhaps to a lesser degree.


>>2. Lack of binary compatibility.

>Xenix will also run all '286, '186, and 8086 binaries compiled on previous 
>versions of SCO (or other) Xenix's.  
>
>No binary compatiblity?  Where have you been?

In addition, groups such as 88open are moving towards Binary Code Standards,
which will increase the domain of binary compatibility in UNIX.



Mark Jeghers
Motorola Computer Systems

daveg@hpcvlx.HP.COM (Dave Guggisberg) (04/07/89)

Have others noticed this, or is it just me?

Every year we hear about how UNIX is going to take over the
world, next year.  And MS-DOS, OS/2 (fill in your favorite/hated OS)
will be dead.

BUT we've been hearing this every year for the last 5 years.  I 
wonder what that means.  

Of course we've also been hearing, for the last 3 years, about
how the Mac will obsolete everything.

Now it's NeXT.  But UNIX will obsolete them all NEXT YEAR, of course.

I wonder what this all means.  Maybe I should pay more attention
to the guy downtown carrying the "World will end tomorrow" sign.

Dave Guggisberg

caromero@phoenix.Princeton.EDU (C. Antonio Romero) (04/08/89)

In article <253@jwt.UUCP> john@jwt.UUCP (John Temples) writes:
>In article <1922@dataio.Data-IO.COM> bright@dataio.Data-IO.COM (Walter Bright) writes:
>>	Source code compatibility simply isn't good enough.
>
>Why not?  What's the big deal about compiling the program being a part of the
>installation procedure? 

Well, for one thing, not all Unixes come with C compilers.  I suppose
one could require that they did, but this would swell the size of Unix.
Also, I see problems with people having enough disk space to handle all
the object files, libraries, etc.
Third, if you're a developer, do you want to be sending the source for
that really nifty proprietary whatever you're selling all over
creation, so your competitors can open it up and learn all your neat
tricks? And of course there's the bozo who makes just a few little
changes because he doesn't like this or that feature of the system, and
then complains when it breaks something else...

Distributing commercial applications as source just doesn't make any
sense.

But on the main issue of the discussion this came out of-- OS/2 vs.
Xenix/Unix/whoeverix:  
We all hear horror stories about how resource-intensive OS/2 is, and
how slow it gets when running several of anything, and how the
'one-at-a-time' compatibility box is a hack and not very useful.  But
what I wonder (and I _am_ a _very_ strong supporter of running Unix on
any possible platform, so this isn't a snipe) is, how much
memory/processor speed/etc. is required to run a similarly windowed
environment on a 386 Unix box?

I haven't heard any information about what it would take to create
functionality for a single user comparable to OS/2's with PM.  Is
anyone out there running, for example, X on a 386 and several
applications, with less memory than it takes to run OS/2 EE and a few
applications?  Some empirical data might help add some meaning to this
discussion-- both the inevitable rigged benchmarks and some subjective
perceptions from people doing real work under each environment.

Also, consider that the 386 version of OS/2 (due out, if I remember,
sometime about a year from now, sure to be late, and probably buggy as hell
when it arrives) will probably do a lot to close the gap on the DOS
compatibility problem; and that the version of OS/2 out now might be
better compared to some of the better 286 Unix solutions, since OS/2 is a 286
operating system.  (I'll grant that the state of the art Unix has moved
past the 286 by now, but this difference explains a lot of the weakness
of OS/2, I think, when compared with the best Unix implementations.)

If the competition between OS/2 and Unix were to be handled purely on
its merits, I think:

	o Developers would rather have Unix as the target platform,
	  because it is well understood at this point, and relatively
	  mature on PC's, as well as relatively open-- one would no
	  longer be enslaved to the MS/IBM axis, but could acquire one's
	  386 operating system from whichever vendor catered best to one's
	  particular needs.  

	o Unix development would also offer the advantage that a vendor
	  could deliver the same software on, for example, 68000-family
	  platforms like Suns with relatively little work for the port,
	  while OS/2, being so tied to the Intel line, probably won't be
	  made available for other processor families.

	o Users would benefit by using Unix for the same reason-- one is
	  no longer married to MS/IBM for one's operating system, and
	  could even move outside the Intel 80x86-line of processors if
	  that best suited one's needs.

	o Users who wanted to connect to IBM mainframes might do better
	  early on with the OS/2 offerings, but that gap would close
	  fairly quickly as Unix OS developers quickly moved to cater to
	  their customers' needs.


I'm not dismissing OS/2 out of hand, since I think that with its
parentage it will inevitably be a major factor in the market-- it won't
just go away if we ignore it.  As I understand it, it actually handles
certain things (like shared libraries) better than most currently
available Unixes (though as I understand things, this gap is closing
too).  I do think, though, that it's an inferior solution to some
already-solved problems.  I'm still mystified at the 'single-user'
decision...  while it's certainly more comfortable to have one user per
machine, completely ruling out the possibility of running several seems
shortsighted.  


Anyway, I'm babbling.  
Anyone with real experience with doing real work in either OS/2 or some
386 Unix please chime in now with some war stories...

-Antonio Romero    romero@confidence.princeton.edu

john@jwt.UUCP (John Temples) (04/09/89)

In article <7632@phoenix.Princeton.EDU> C. Antonio Romero writes:
>Well, for one thing, not all Unixes come with C compilers.  I suppose
>one could require that they did, but this would swell the size of Unix.

True, but I can't imagine anyone having something as powerful as Unix without
having a C compiler to go with it.  It seems like a waste.

>Also, I see problems with people having enough disk space to handle all
>the object files, libraries, etc.

The object files and libraries can be deleted after you're done compiling.

>Third, if you're a developer, do you want to be sending the source for
>that really nifty proprietary whatever you're selling all over

Gimpel Software is making their PC-Lint product available in something called
"shrouded source".  It lets you compile the software on your target machine,
without being able to make sense of the source code.  I don't know how viable
this technique is, but it seems like it could have possibilities.

>We all hear horror stories about how resource-intensive OS/2 is, and

I don't know how resource intensive OS/2 is, but I doubt it's any more so than
386 Unix.  I would consider 4MB of RAM and an 80 MB drive as a _minimum_
configuration under which to run 386 Unix with DOS Merge as a single user
platform.  Yes, it will run in less, but not at what I would consider
acceptable performance.  But in this era of cheap hardware, I don't see minor
differences in resource requirements as being a basis on which to compare 
operating systems.  Trying to criticize an operating system like Unix or OS/2
because it requires more resources than a monitor program like DOS is silly.
I use Unix whenever I can, DOS as little as possible, and I've never touched
OS/2.  But I'm not going to say that Unix is "better" than the other two.  It
suits my needs, that's why I use it.  Everyone should use the OS that best
suits their needs.
-- 
John Temples - UUCP: {uiucuxc,hoptoad,petsd}!peora!rtmvax!bilver!jwt!john

davidsen@steinmetz.ge.com (Wm. E. Davidsen Jr) (04/10/89)

In article <256@jwt.UUCP> john@jwt.UUCP (John Temples) writes:

| Gimpel Software is making their PC-Lint product available in something called
| "shrouded source".  It lets you compile the software on your target machine,
| without being able to make sense of the source code.  I don't know how viable
| this technique is, but it seems like it could have possibilities.

 Shrouded source is very old and popular. I have seen a lot of it on
BBS's and sometimes posted to USEnet. :-)
-- 
	bill davidsen		(wedu@crd.GE.COM)
  {uunet | philabs}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

pajerek@isctsse.UUCP (Don Pajerek) (04/11/89)

In article <101000047@hpcvlx.HP.COM> daveg@hpcvlx.HP.COM (Dave Guggisberg) writes:
>
>Have others noticed this, or is it just me?
>
>Every year we hear about how UNIX is going to take over the
>world, next year.  And MS-DOS, OS/2 (fill in your favorite/hated OS)
>will be dead.                
>
>BUT we've been hearing this every year for the last 5 years.  I 
>wonder what that means.  

People in large numbers won't switch to UNIX, or OS/2, or anything
else until 1) there is a compelling reason to do so; and/or 2) they
don't have to throw away their existing software investment.

Both the OS/2 and UNIX partisans believe that true multi-tasking
is such a compelling reason, but it's only recently that systems have
been available at a reasonable price that meet both conditions. I
refer naturally to the 386 machines that run merged DOS/UNIX environments
that allow users to multitask their DOS applications.

Previous multitasking approaches have failed in many ways. For example,
Windows will only multitask Windows applications; OS/2 will only multitask
OS/2 applications.  Besides, any multitasking at all on less than a 386
processor is impractical for performance reasons.

So predictions of UNIX dominance have been premature so far, but this
may change over the next couple of years.


Don Pajerek

jwright@atanasoff.cs.iastate.edu (Jim Wright) (04/12/89)

In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:
| Both the OS/2 and UNIX partisans believe that true multi-tasking
| is such a compelling reason, but it's only recently that systems have
| been available at a reasonable price that meet both conditions.

...only recently that *INTEL* based systems...

| Previous multitasking approaches have failed in many ways.
                                  ^ with *INTEL* processors

| Besides, any multitasking at all on less than a 386
| processor is impractical for performance reasons.
           ^ (restricting discussion to *INTEL* processors)

A certain Motorola-based system has been multi-tasking since the first
day it was released, and is eminently affordable.

And remember boys and girls, Bill Gates says you can't multi-task
with less than 4MB of memory.  :-)

-- 
Jim Wright
jwright@atanasoff.cs.iastate.edu

pmjc@uhura.cc.rochester.edu (Pam Arnold) (04/12/89)

In article <978@atanasoff.cs.iastate.edu> jwright@atanasoff.cs.iastate.edu (Jim Wright) writes:
>In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:
>| Both the OS/2 and UNIX partisans believe that true multi-tasking
>| is such a compelling reason, but it's only recently that systems have
>| been available at a reasonable price that meet both conditions.
>
>...only recently that *INTEL* based systems...
[and so on]

Jim, note the name of this newsgroup. All of the great things that have
been done with Motorola processors are completely irrelevant to the IBM PC
community.

>Jim Wright


Pam Arnold

jwright@atanasoff.cs.iastate.edu (Jim Wright) (04/12/89)

In article <1483@ur-cc.UUCP> pmjc@uhura.cc.rochester.edu (Pam Arnold) writes:
| Jim, note the name of this newsgroup. All of the great things that have
| been done with Motorola processors are completely irrelevant to the IBM PC
| community.
| 
| Pam Arnold

Not to the point of fooling yourself into thinking the universe revolves
around a single [processor, architecture, whatever].  The posting was
wrong, and I pointed it out.  You're right, not many people have PC clones
with anything but Intel [compatible] processors.  But an 80386 isn't some
magical chip that suddenly makes multi-tasking possible.  There are other
colors besides blue.

-- 
Jim Wright
jwright@atanasoff.cs.iastate.edu

bcw@rti.UUCP (Bruce Wright) (04/12/89)

In article <256@jwt.UUCP>, john@jwt.UUCP (John Temples) writes:
> In article <7632@phoenix.Princeton.EDU> C. Antonio Romero writes:
> >Well, for one thing, not all Unixes come with C compilers.  I suppose
> >one could require that they did, but this would swell the size of Unix.
> 
> True, but I can't imagine anyone having something as powerful as Unix without
> having a C compiler to go with it.  It seems like a waste.

I certainly can't imagine -my- wanting Unix without having a C compiler
to go with it.  But then we are both techies.  Many end-user type people
wouldn't know what a C compiler was if it hit them over the head - they
don't know or -want- to know how to write code (please, no flames, I know
not all end-users are like this, but many certainly are - I even know and
like some of them).  

The problem from the point of view of a software -vendor- is that many
Unix vendors -do not- sell their Unix with a C compiler (it is often an
added-cost option).  Also, even if the distribution contains a C compiler,
you can't count on being able to use it - the customer may have deleted
it, or not know how to run it (shell scripts don't work so well if the
customer's compiler doesn't take all the switches yours did).

Many people seem perfectly happy to run their machines without any kind
of programming language at all - or at most a Basic interpreter.  We may
find this a waste, but -they- don't - and until they do, the market is
unlikely to provide them with things they don't feel they need.

> Gimpel Software is making their PC-Lint product available in something called
> "shrouded source".  It lets you compile the software on your target machine,
> without being able to make sense of the source code.  I don't know how viable
> this technique is, but it seems like it could have possibilities.

Frankly, I'm skeptical.  It is probably possible to hide things from
nontechnical users this way, but they aren't the people that the software
developer in this scenario would be worried about.  At best it would slow
down a technical person - the question would then become whether it would
be cheaper to figure out how the program worked, or to duplicate it.  I
submit that the interesting question is not figuring out the program in
detail, but trying to identify any interesting "kernel" algorithms that
give the program much of its utility/ease-of-use/whatever.  Judging by
how many people have made quite a bit of sense out of machine code, I
would say that "shrouded source", being several levels of abstraction
higher (though still not as convenient as commented source), would make 
the task much easier.

This doesn't mean that every software company will make the same
choices - Gimpel Software, mentioned above, appears to have decided that
the technical developments they had done for their PC-lint program were
worth protecting a little, but not very much.  Obviously this is something
that each company has to address - whether to provide complete source, or
nothing but binaries.  It's not obvious to me that you can expect a one-
size-fits-all strategy (though as noted above, given the structure of the 
PC and Unix markets, I don't see how a developer can rely on the end-user
necessarily having a C compiler unless the "end-users" are C programmers!).

							Bruce C. Wright

caromero@phoenix.Princeton.EDU (C. Antonio Romero) (04/12/89)

In article <256@jwt.UUCP> john@jwt.UUCP (John Temples) writes:
>In article <7632@phoenix.Princeton.EDU> C. Antonio Romero writes:
>>Well, for one thing, not all Unixes come with C compilers.  I suppose
>>one could require that they did, but this would swell the size of Unix.
>True, but I can't imagine anyone having something as powerful as Unix without
>having a C compiler to go with it.  It seems like a waste.

Your average corporate type may object to having to pay for the C
compiler just so he can buy software; also, keep in mind that the juicy
market both sides in this battle will be going for is not a market of
developers-- therefore in principle (at least) the target customers
won't share your (or my) opinion that not having a good compiler around is
almost inconceivable...

Also, given the coming (or current, from what I've heard here)
binary compatibility among the flavors of 386 unix, this is a moot
point...

>>Third, if you're a developer, do you want to be sending the source for
>>that really nifty proprietary whatever you're selling all over
>
>Gimpel Software is making their PC-Lint product available in something called
>"shrouded source".  It lets you compile the software on your target machine,
>without being able to make sense of the source code.  I don't know how viable
>this technique is, but it seems like it could have possibilities.

An interesting idea.  I suppose they have some kind of decryption code
built into the compiler that takes something like crypt output and
decrypts it and feeds it straight into the compiler... anything else 
(like some kind of tokenized representation for the code) might
prove too easy to crack.  Anyone know what they're really doing?
(this would digress too far from the subject, so maybe split and
change the subject line if you respond to this one...)


>>We all hear horror stories about how resource-intensive OS/2 is, and

>I don't know how resource intensive OS/2 is, but I doubt it's any more so than
>386 Unix.  I would consider 4MB of RAM and an 80 MB drive as a _minimum_
>configuration under which to run 386 Unix with DOS Merge as a single user
>platform.  Yes, it will run in less, but not at what I would consider
>acceptable performance.  

That's a fair statement-- though I think that sets the memory
requirements considerably lower than OS/2 EE.  With some kind of
windowing environment, I guess they're about equal.

>But in this era of cheap hardware, I don't see minor
>differences in resource requirements as being a basis on which to compare 
>operating systems.  Trying to criticize an operating system like Unix or OS/2
>because it requires more resources than a monitor program like DOS is silly.

Well, I wasn't saying to compare it to DOS on these grounds-- DOS and
either of the others are simply too different for meaningful comparison.
I just wondered if Unix ran on 386 boxes with similar performance
on similar hardware.  Sounds like they probably are nearly equal.

>  Everyone should use the OS that best suits their needs.

Couldn't agree with you more.  

-Antonio Romero    romero@confidence.princeton.edu

lbr@holos0.UUCP (Len Reed) (04/12/89)

In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:

=Previous multitasking approaches have failed in many ways. For example,
=Windows will only multitask Windows applications; OS/2 will only multitask
=OS/2 applications.  Besides, any multitasking at all on less than a 386
=processor is impractical for performance reasons.

Nonsense.  SCO Xenix on a PC-AT type system (80286, 2 meg or so of memory)
works very nicely indeed.  This is not only a multitasking system, it
is a multiuser system.  Sure, it can be overloaded; any system can be.
Typical multiuser/multitasking operation for us:
	1) getting news over the modem at 2400 baud.
	2) one user reading news (rn).
	3) one user doing heavy-duty compiles.
	4) one user doing edits and an occasional compile.
	5) printing to a low-speed device.
	6) perhaps some other background processing.
-- 
    -    Len Reed

shapiro@rb-dc1.UUCP (Mike Shapiro) (04/12/89)

In article <2882@rti.UUCP> bcw@rti.UUCP (Bruce Wright) writes:
>In article <256@jwt.UUCP>, john@jwt.UUCP (John Temples) writes:
>> In article <7632@phoenix.Princeton.EDU> C. Antonio Romero writes:
>> >Well, for one thing, not all Unixes come with C compilers.  I suppose
>> >one could require that they did, but this would swell the size of Unix.
>> 
>> True, but I can't imagine anyone having something as powerful as Unix without
>> having a C compiler to go with it.  It seems like a waste.
     ...
>                  I don't see how a developer can rely on the end-user
>necessarily having a C compiler unless the "end-users" are C programmers!).
>

I think that this discussion thread really answers the question of the
topic.  Maybe "UNIX doesn't catch on" because too many UNIX pushers
don't think that potential UNIX users should be able to use UNIX
unless they are C programmers or what has been called "power users" or
whatever.  This is reflected in many user interfaces which are not as
easy to use as those under DOS or on the Mac.  Remember, though, that
the truly important users from a long-term success viewpoint are those
users who couldn't care less what the underlying system is, as long
as they can use the application system (which includes the application
programs and underlying system) to get their jobs done as easily as
possible.  AND, they don't have to care what the underlying software
system is!



-- 

Michael Shapiro, Gould/General Systems Division (soon to be Encore)
15378 Avenue of Science, San Diego, CA 92128
(619)485-0910    UUCP: ...sdcsvax!ncr-sd!rb-dc1!shapiro

dedic@mcgill-vision.UUCP (David Dedic) (04/14/89)

Hi,
	As several people, notably Messrs. Denninger and Davidsen, have pointed
out, Mr. Bright was incorrect in some of his statements. This however does
not mitigate his point that UNIX compatibility is required prior to general
acceptance of the OS.
	The advantage that MS-DOS and OS/2 have is that they are essentially
developed by a single organisation (Microsoft). Thus a "standard" is easy
to maintain; it is tyrannically allocated from above.
	I agree with Mr. Bright that small developers find that a more
comfortable milieu. I certainly do.
	One question: how did the various suppliers of PC-UNIX ever agree on
system call management? Does this mean that two competing micro-software
vendors actually talked to each other outside a court room? Unbelievable ;-}
	
	In answer to Mr. Bright, UNIX is tending towards higher compatibility;
this is the goal of the OSF and Archer Group/Sun efforts. Furthermore
the GNU/Copyleft effort is pushing for global source distribution... i.e.,
no more binary distributions which may or may not work. If it does not work
and you still need it then you can hire a hacker to fix it.

	All of these forces are out in the UNIX world and their results
bear watching.


	The above are my own opinions and should not be construed
as the policies or opinions of my employer(s).

						Dave Dedic

toma@tekgvs.LABS.TEK.COM (Tom Almy) (04/14/89)

In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:
>Previous multitasking approaches have failed in many ways. For example,
>Windows will only multitask Windows applications; OS/2 will only multitask
>OS/2 applications.  Besides, any multitasking at all on less than a 386
>processor is impractical for performance reasons.

Ridiculous!   There have been commercial multitasking/multiuser systems
around for years on machines less powerful.  PDP-8s had TSE which supported
16 users on a machine certainly less than 1/10th the speed.

I wrote a successful multitasking application on a Z80, and even implemented
a multiuser multitasking (multiple tasks per user) Forth environment
on a Z80 (3 users).

If you need at least a 386 for multitasking, your tasks are too big!

Tom Almy
toma@tekgvs.labs.tek.com
Standard Disclaimers Apply

pajerek@isctsse.UUCP (Don Pajerek) (04/14/89)

In article <983@atanasoff.cs.iastate.edu> jwright@atanasoff.cs.iastate.edu (Jim Wright) writes:
>In article <1483@ur-cc.UUCP> pmjc@uhura.cc.rochester.edu (Pam Arnold) writes:
>| Jim, note the name of this newsgroup. All of the great things that have
>| been done with Motorola processors are completely irrelevant to the IBM PC
>| community.
>| 
>| Pam Arnold
>
>Not to the point of fooling yourself into thinking the universe revolves
>around a single [processor, architecture, whatever].
>
>-- 
>Jim Wright


I think the point of this discussion is the success or lack thereof of
UNIX on IBM-PC type machines.  My original posting simply pointed out
that predictions of UNIX achieving wide use among PC users have been
premature because until now (i.e., 80386) users have not had a multi-tasking
solution that provided both adequate performance and complete upward
compatibility for their existing DOS applications.

As long as these are the criteria, the universe DOES revolve around a
single processor (processor family, actually).  I am not questioning
the merits of the Motorola processors.  I do maintain, however, that
developments in the Motorola world don't bring PC users one bit
closer to having a viable UNIX environment on their PC's.


Don Pajerek

pajerek@isctsse.UUCP (Don Pajerek) (04/14/89)

In article <2038@holos0.UUCP> lbr@holos0.UUCP (Len Reed) writes:
>In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:
>=Previous multitasking approaches have failed in many ways. For example,
>=Windows will only multitask Windows applications; OS/2 will only multitask
>=OS/2 applications.  Besides, any multitasking at all on less than a 386
>=processor is impractical for performance reasons.
>
>Nonsense.  SCO Xenix on a PC-AT type system (80286, 2 meg or so of memory)
>works very nicely indeed.


Will Xenix on a 286 machine multitask DOS applications?


>    -    Len Reed




Don Pajerek

john@jwt.UUCP (John Temples) (04/14/89)

In article <7697> caromero@phoenix.Princeton.EDU (C. Antonio Romero) writes:
>also, keep in mind that the juicy
>market both sides in this battle will be going for is not a market of
>developers-- therefore in principle (at least) the target customers
>won't share your (or my) opinion that not having a good compiler around is
>almost inconceivable...

Does this mean that Unix is doomed to be used only by techies like us? I don't
consider myself a "user"; I use Unix because it's fun to hack on.  I suppose
most "users" are interested in running things like Lotus 1-2-3, dBase, and
WordPerfect -- none of which interest me.  Without a very rigidly defined
standard, I don't see how such applications can be supported under Unix.  A
couple of very recent experiences have shaken my confidence in the "386 Unix
binary standard."  WordPerfect 4.2 for Microport Unix won't run correctly on
386/ix.  It appears as though it may just be due to differences in the console
driver -- but I guess a binary standard doesn't guarantee a device driver
standard.  More disturbing was finding that ksh for 386/ix wouldn't run on
Microport.  Something like a shell doesn't rely on console drivers, so I would
expect the binary standard to work.  ksh wouldn't even recognize internal
commands like "cd". 
 
Now that Microport appears to be defunct, I wonder how WordPerfect Corp. feels
about entering the 386 Unix market.  Will this frighten off other prospective
Unix developers like Lotus?  I suppose Unix will survive through it all, but
can it succeed commercially?
-- 
John Temples - UUCP: {uiucuxc,hoptoad,petsd}!peora!rtmvax!bilver!jwt!john

stever@tree.UUCP (Steve Rudek) (04/15/89)

In SOME article SOMEBODY WRITES:
> >>We all hear horror stories about how resource-intensive OS/2 is, and

I don't normally frequent this newsgroup so I probably won't be around for
any followups--but I couldn't let this pass without speaking out on something
that's been bugging me for some time. 

I would LIKE for UNIX to win out against OS/2.  I feel that UNIX is at a
tremendous disadvantage, though, for a reason that I've never seen
discussed:  UNIX programmers and companies which are selling to the UNIX
marketplace are so caught up in keeping their products easily portable between
different UNIX machines that they generally make little or no effort to
optimize their code via assembly.  They write everything in C and a well
written 100% C program generally can't hold a candle to a well written
assembly program--the compilers just aren't that good and they never will
get that good since the UNIX compilers themselves are caught up in the
portability trap.

Portability is the enemy of excellence.

terry@eecea.eece.ksu.edu (Terry Hull) (04/16/89)

In article <258@jwt.UUCP> john@jwt.UUCP (John Temples) writes:
> 
>Now that Microport appears to be defunct, I wonder how WordPerfect Corp. feels
>about entering the 386 Unix market.  Will this frighten off other prospective
>Unix developers like Lotus?  I suppose Unix will survive through it all, but
>can it succeed commercially?  

A couple of things here.  Number one, I expect that companies like Sun
and SCO were the ones that pushed WPC into doing a UNIX port of
WP 4.2.  I think WPC made it available for uPort because it did
not cost them too much extra.  Does anyone know how the installed base
of SCO, Sun, and uPort compare?  BTW:  I have a client using WP 4.2
and when I called WPC tech support I got correct answers fast.  What a
change from some of the other software vendors I call.  (And, don't
forget WP support is FREE.)  


Secondly, I think that major software companies like Lotus will look
more at the incredible growth of SCO than they will at the failure of
uPort.  



-- 
Terry Hull 
Department of Electrical and Computer Engineering, Kansas State University
Work:  terry@eecea.eece.ksu.edu, rutgers!ksuvax1!eecea!terry
Play:  tah386!terry@eecea.eece.ksu.edu, rutgers!ksuvax1!eecea!tah386!terry

bcw@rti.UUCP (Bruce Wright) (04/16/89)

In article <4937@tekgvs.LABS.TEK.COM>, toma@tekgvs.LABS.TEK.COM (Tom Almy) writes:
> In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:
> >Previous multitasking approaches have failed in many ways. For example,
> >Windows will only multitask Windows applications; OS/2 will only multitask
> >OS/2 applications.  Besides, any multitasking at all on less than a 386
> >processor is impractical for performance reasons.
> 
> Ridiculous!   There have been commercial multitasking/multiuser systems
> around for years on machines less powerful.  PDP-8s had TSE which supported
> 16 users on a machine certainly less than 1/10th the speed.

The old multitasking operating systems for things like the PDP-8
and so forth required application programs which cooperated with the
operating system.  The problem with multitasking on the IBM-PC is not
that you need any kind of great engine to run some sort of multitasking
(there are a number of multitasking operating systems for a stock IBM-PC),
it's that in order to run MS-DOS software under such a system you need a
fairly powerful engine.

The reason for this is simply that the typical applications available for
MS-DOS spend a fair amount of energy to subvert the operating system.
This unfortunate state of affairs exists because of the abominable
performance that MS-DOS has for certain operations (screen operations
generally, also to a somewhat lesser extent for disk and keyboard
operations), and also because it lacks useful functions (many devices
are very poorly supported, especially the communications devices).
The upshot is that the writer of a multitasking operating system is
left with three unappetizing choices:

	o Ignore the possibility that the application will do any of
	  the ill-behaved things that MS-DOS applications programs
	  always seem to want to do.  This will almost guarantee that
	  either the application task or the operating system will
	  crash when the application attempts the operation (depending
	  on the nature of the operation and the design of the operating
	  system).  This will make most existing MS-DOS applications
	  unusable on the system, or at least will require disabling
	  multitasking while the application is running.

	o Write an O/S for a processor which allows a full virtual MS-DOS
	  machine to be created, with appropriate emulation when attempts
	  are made to reference video memory or other devices.  This
	  requires at least a 386 - the 286 does not allow the creation
	  of a complete virtual machine environment.

	o Complete emulation of the processor instruction set (in other
	  words, interpretively execute the entire image).  This will
	  work, but at an enormous (and generally unacceptable) hit in 
	  performance.

In a real way, the problem is NOT the 8086 or the 286, it's MessyDos.
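
To make the "subversion" concrete: a typical fast DOS application skipped DOS and the BIOS entirely and poked character cells straight into text-mode video memory at segment B800.  Below is a minimal sketch of that technique in C, with an ordinary array standing in for the real video RAM (the function name and layout constants are illustrative, not from any DOS toolkit):

```c
#include <stddef.h>

/* Simulated 80x25 color text-mode video RAM.  On a real PC this
   lives at segment 0xB800; here a plain array stands in for it --
   which is also what a virtual-8086 monitor substitutes behind an
   ill-behaved program's back. */
#define COLS 80
#define ROWS 25
static unsigned char vram[ROWS * COLS * 2];  /* char byte + attribute byte */

/* Direct screen write, bypassing DOS and the BIOS -- the fast but
   "ill-behaved" technique described above. */
void put_direct(int row, int col, const char *s, unsigned char attr)
{
    unsigned char *cell = vram + (size_t)(row * COLS + col) * 2;
    while (*s) {
        cell[0] = (unsigned char)*s++;  /* character */
        cell[1] = attr;                 /* color attribute */
        cell += 2;
    }
}
```

Because writes like these never pass through a system call, a multitasker can only catch them with hardware support for trapping memory references (the 386's paging and virtual-8086 mode) or by interpreting every instruction, as the list above lays out.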

> If you need at least a 386 for multitasking, your tasks are too big!

Reminds me of a comment attributed to John von Neumann to the effect that
if you couldn't write it in 4K, it wasn't worth writing (of course he
was talking about 4K of 36-bit words, but even so ...).  (I don't remember
the exact size or the exact wording and the attribution itself may be 
apocryphal - anyone know?).  Certainly there are some computationally-
intensive problems which would not run very effectively on anything less 
than a 386, but you're certainly right that for basic multitasking, you
really don't need a 386 or even a 286.

Unfortunately what the market wants isn't just multitasking on a 80*86
machine, it's a multitasking MessyDos environment.  Sigh.

						Bruce C. Wright

pechter@scr1.UUCP (04/16/89)

The Xenix OS (if AT&T won't let 'em call it Unix I won't either) works fairly
well on even 8086 machines like my AT&T 6300.

The problem with MS-DOS is that it's CP/M on STEROIDS.  Bigger, bulkier, but really
not too much better...


-- 
Bill Pechter -- Home - 103 Governors Road, Lakewood, NJ 08701 (201)370-0709
Work -- Concurrent Computer Corp., 2 Crescent Pl, MS 172, Oceanport,NJ 07757 
Phone -- (201)870-4780    Usenet  . . .  rutgers!pedsga!tsdiag!scr1!pechter
**    Why is it called software when it's so hard to install. RTF what?      **

allbery@ncoast.ORG (Brandon S. Allbery) (04/17/89)

As quoted from <256@jwt.UUCP> by john@jwt.UUCP (John Temples):
+---------------
| In article <7632@phoenix.Princeton.EDU> C. Antonio Romero writes:
| >Well, for one thing, not all Unixes come with C compilers.  I suppose
| >one could require that they did, but this would swell the size of Unix.
| 
| True, but I can't imagine anyone having something as powerful as Unix without
| having a C compiler to go with it.  It seems like a waste.
+---------------

Tell that to our clients.  It *IS* annoying, but also common.

+---------------
| Gimpel Software is making their PC-Lint product available in something called
| "shrouded source".  It lets you compile the software on your target machine,
| without being able to make sense of the source code.  I don't know how viable
| this technique is, but it seems like it could have possibilities.
+---------------

The OSF recently released an RFT for this kind of technology, the idea being
that one could shrink-wrap software to run on ANY OSF/1 host:  the software
is supplied as machine-independent pseudo-code, and a special compiler
supplied with OSF/1 will compile it into native code.  (Personal opinion:
it's gonna have to be a D*MN good compiler to be able to produce code that
runs reasonably well from low-level code that can't assume anything about
the processor environment.)
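
The shrink-wrap idea can be sketched with a toy stack-machine "distribution format" (the opcode names here are invented for illustration): the vendor ships the architecture-neutral code, and the target system turns it into something executable.  This sketch merely interprets it, where an OSF-style installer would translate it to native code once, at install time:

```c
/* Toy architecture-neutral distribution format: a stack-machine
   bytecode a vendor could ship shrink-wrapped.  A real system of
   the kind the OSF RFT describes would compile this to native code
   at install time; this sketch just interprets it.  Opcodes are
   invented for the example. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_END };

int execute(const int *code)
{
    int stack[16];
    int sp = 0;

    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++];          break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp];  break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp];  break;
        case OP_END:  return stack[sp - 1];
        }
    }
}
```

The quality worry voiced above lands exactly here: an installer-compiler gets one shot at this low-level code, with no knowledge of the source-level structure an ordinary optimizing compiler can exploit.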

+---------------
| >We all hear horror stories about how resource-intensive OS/2 is, and
| 
| I don't know how resource intensive OS/2 is, but I doubt it's any more so than
| 386 Unix.  I would consider 4MB of RAM and an 80 MB drive as a _minimum_
| configuration under which to run 386 Unix with DOS Merge as a single user
+---------------

The key words being "DOS Merge".  I have a 2MB AT386 with Oracle on it in my
apartment at the moment (borrowed from the office); it runs fine.  If I want
to use DOS, I'll use my own machine.  And with more DOS programs coming out
for use under native Unix all the time, VP/ix or DOS-Merge looks less and
less interesting all the time.  (It has a 60MB RLL drive, if you care.)  But
it takes 4MB to run OS/2 plus SQL Server, from what I hear.

++Brandon
-- 
Brandon S. Allbery, moderator of comp.sources.misc	     allbery@ncoast.org
uunet!hal.cwru.edu!ncoast!allbery		    ncoast!allbery@hal.cwru.edu
      Send comp.sources.misc submissions to comp-sources-misc@<backbone>
NCoast Public Access UN*X - (216) 781-6201, 300/1200/2400 baud, login: makeuser

smvorkoetter@watmum.waterloo.edu (Stefan M. Vorkoetter) (04/18/89)

In article <268@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
>Portability is the enemy of excellence.

Bull!

ddb@ns.network.com (David Dyer-Bennet) (04/19/89)

In article <2890@rti.UUCP> bcw@rti.UUCP (Bruce Wright) writes:
:In article <4937@tekgvs.LABS.TEK.COM>, toma@tekgvs.LABS.TEK.COM (Tom Almy) writes:
:> In article <199@isctsse.UUCP> pajerek@isctsse.UUCP (Donald Pajerek) writes:
:> Ridiculous!   There have been commercial multitasking/multiuser systems
:> around for years on machines less powerful.  PDP-8s had TSE which supported
:> 16 users on a machine certainly less than 1/10th the speed.
:
:The old multitasking operating systems for things like the PDP-8
:and so forth required application programs which cooperated with the
:operating system.

Ummm, this turns out not to be the case.  TSS-8 on a PDP-8I had hardware
protection against the timeshared tasks doing anything that would damage
the rest of the system.  Possibly that was through a required hardware
add-on to the basic 8/I, I wasn't involved in ordering and configuration
management for that system.
-- 
David Dyer-Bennet, ddb@terrabit.fidonet.org, or ddb@ns.network.com
or ddb@Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
or ...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
or Fidonet 1:282/341.0, (612) 721-8967 9600hst/2400/1200/300

terry@eecea.eece.ksu.edu (Terry Hull) (04/19/89)

In article <9286@watcgl.waterloo.edu> smvorkoetter@watmum.waterloo.edu (Stefan M. Vorkoetter) writes:
>In article <268@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
>>Portability is the enemy of excellence.
>
>Bull!

I agree.  I do not think that portability is the enemy of excellence.
On the other hand, writing inefficient code in a high-level language
will not produce a good spreadsheet to run on an 8088.  You might have
a shot at getting an acceptable product if you did some careful
analysis of the program and optimized appropriate sections.  You may
need to write a routine in assembly, or you may just need to change
the algorithm.


-- 
Terry Hull 
Department of Electrical and Computer Engineering, Kansas State University
Work:  terry@eecea.eece.ksu.edu, rutgers!ksuvax1!eecea!terry
Play:  tah386!terry@eecea.eece.ksu.edu, rutgers!ksuvax1!eecea!tah386!terry

ddb@ns.network.com (David Dyer-Bennet) (04/21/89)

In article <2899@rti.UUCP> bcw@rti.UUCP (Bruce Wright) writes:
:In article <1292@ns.network.com>, ddb@ns.network.com (David Dyer-Bennet) writes:
:> In article <2890@rti.UUCP> bcw@rti.UUCP (Bruce Wright) writes:
:> :
:> :The old multitasking operating systems for things like the PDP-8
:> :and so forth required application programs which cooperated with the
:> :operating system.
:> 
:> Ummm, this turns out not to be the case.  TSS-8 on a PDP-8I had hardware
:> protection against the timeshared tasks doing anything that would damage
:> the rest of the system.
:
:Although I was (partly) thinking about operating systems which had no
:hardware protection, I really was addressing a completely different
:problem:  None of these small-machine operating systems (with or without 
:hardware protection) provided ANY kind of emulation for programs which
:insisted on making direct hardware I/O calls.  All of them that I am
:aware of would simply terminate the offending program (if they could
:and if they got an appropriate interrupt to find out about it).

Still not completely true.  The "test for character available" and "read
character" (from tty) hardware instructions were trapped and emulated by
TSS-8, as were most of the other character-at-a-time device instructions.
In fact TSS-8 was smart enough to recognize the busy wait loop and stop
running it (providing its own non-busy wait).  There were TSS-8 native
sys calls that avoided this, but these emulations were provided for
compatibility with previous standalone programs.
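
That trap-and-emulate scheme can be sketched as a toy monitor (instruction names loosely modeled on PDP-8 console I/O, but the details here are invented): the monitor intercepts the "character ready?" and "read character" instructions and services them from its own buffer, so a standalone-style program runs unmodified under timesharing.

```c
/* Toy trap-and-emulate monitor for console input, loosely modeled
   on PDP-8-style console instructions (details invented).  KSF
   skips the next instruction if a character is ready; KRB reads it
   into the accumulator.  The monitor traps both and serves them
   from its own input buffer. */
enum { KSF, KRB, JMP, HALT };

typedef struct { int op; int arg; } Insn;

int run(const Insn *prog, const char *input)
{
    int pc = 0, acc = 0, in = 0;

    for (;;) {
        switch (prog[pc].op) {
        case KSF:  /* trap: "ready" iff the monitor has buffered input;
                      a real TSS-8 would deschedule the job here rather
                      than let it spin */
            pc += (input[in] != '\0') ? 2 : 1;
            break;
        case KRB:  /* trap: hand over the buffered character */
            acc = (unsigned char)input[in++];
            pc += 1;
            break;
        case JMP:
            pc = prog[pc].arg;
            break;
        case HALT:
            return acc;
        }
    }
}
```

The classic standalone poll loop -- KSF; JMP back; KRB -- runs under this monitor without change, which is the whole point: the busy wait never actually executes as a busy wait.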

Now, I think this is pretty much a side-issue, and doesn't affect the
validity of your basic argument, so I'll shut up now.  But TSS-8 was
the first timesharing system I ever saw, and I'm STILL impressed with
some of the things it managed to do on a system with 32k memory if
fully configured.  

:The unfortunate situation that the PC world has backed itself into is
:one that requires that any environment that provides "PC emulation" must
:also emulate the HARDWARE environment (with all its quirks ....) of a
:real PC.  It's not enough to just provide DOS or even BIOS compatibility.
:From a software design point of view, this is an almost unspeakable
:abomination, one that we will regret for many years.

Yes, exactly, I agree, precisely, good point, definitely.  Of course, they
*did* get some pretty zippy Lotus performance on floppy-based systems on
the plus side; but like most contracts with the devil, it's probably not
worth it in the long run.
-- 
David Dyer-Bennet, ddb@terrabit.fidonet.org, or ddb@ns.network.com
or ddb@Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
or ...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
or Fidonet 1:282/341.0, (612) 721-8967 9600hst/2400/1200/300

bcw@rti.UUCP (Bruce Wright) (04/21/89)

In article <1308@ns.network.com>, ddb@ns.network.com (David Dyer-Bennet) writes:
> The "test for character available" and "read
> character" (from tty) hardware instructions were trapped and emulated by
> TSS-8, as were most of the other character-at-a-time device instructions.
> In fact TSS-8 was smart enough to recognize the busy wait loop and stop
> running it (providing its own non-busy wait).  There were TSS-8 native
> sys calls that avoided this, but these emulations were provided for
> compatibility with previous standalone programs.

Interesting.  I have used some PDP-8 systems as a user, but never did
much on the assembler level with it.  That capability would make it
the only small machine OS I'm aware of that had anything like that
kind of capability.  I've used quite a few small multitasking OSs on
various machines and although I've been pretty familiar with the innards
of some of them I have never seen that sort of thing.

It is, of course, not at all unusual on large machines (able to do a
nearly complete job of hardware emulation) - look at VM/370.  But this
isn't there to allow you to run ill-behaved application programs - it's
there to be able to run (and debug) different operating systems.

It does sound sort of limited though if it only caught single-character
hardware calls.

						Bruce C. Wright

les@chinet.chi.il.us (Leslie Mikesell) (04/21/89)

In article <13573@ncoast.ORG> allbery@ncoast.UUCP (Brandon S. Allbery) writes:

>The key words being "DOS Merge".  I have a 2MB AT386 with Oracle on it in my
>apartment at the moment (borrowed from the office); it runs fine.  If I want
>to use DOS, I'll use my own machine.  And with more DOS programs coming out
>for use under native Unix all the time, VP/ix or DOS-Merge looks less and
>less interesting all the time.  (It has a 60MB RLL drive, if you care.)  But
>it takes 4MB to run OS/2 plus SQL Server, from what I hear.

Yes, DOS Merge is CPU intensive.  Here is what ps says about it: VP/ix on
an AT&T 6386 (done on April 18 so CPU time is for 5 days of mostly sitting
at a DOS ">" prompt in a background virtual terminal).

     UID   PID  PPID  C    STIME TTY      TIME COMMAND
    root     1     0  0  Apr 10  ?        4:04 /etc/init 
    root     3     0  0  Apr 10  ?       50:17 bdflush
     les  3663  3662 20  Apr 13  vt05    7347:16 dos 
[ other reasonable things deleted..]

Sar always reports 0 idle time when a dos process is running - even
without a dos program running.  It doesn't seem to affect unix performance
as badly as you would expect from this, though.  It would be a useful
product if it provided a real netbios interface to allow multiple sessions
to cooperate.

Les Mikesell

rat@madnix.UUCP (David Douthitt) (04/22/89)

Who says UNIX doesn't catch on?

I've seen numerous software packages which either emulate UNIX or have
taken some ideas from UNIX:

	Apple II:	Davex
			KIX
	IBM:		MKS Toolkit
			QNX
			Minix
			MSDOS
	Atari ST:	Minix
	CP/M:		CP/M
			ZCPR 3.3

And that's only the systems that emulate or "stole from" UNIX.  Consider
that both Apple and IBM, the two BIGGEST microcomputer makers, both now
carry UNIX for their high-end machines.

Seems to me UNIX has caught on - and has for some time - call it the
quiet revolution :-)

	[david]

-- 
!======= David Douthitt :::: Madison, WI =======!== The Stainless Steel Rat ==!
!  ArpaNet: madnix!rat@cs.wisc.edu              !                             !
!  UseNet: ...uwvax!astroatc!nicmad!madnix!rat  !  Mad Apple Forth:           !
!               {decvax!att}!                   !  The Madness starts here.   !

allbery@ncoast.ORG (Brandon S. Allbery) (04/22/89)

As quoted from <268@tree.UUCP> by stever@tree.UUCP (Steve Rudek):
+---------------
| I would LIKE for UNIX to win out against OS/2.  I feel that UNIX is at a
| tremendous disadvantage, though, for a reason that I've never seen
| discussed:  UNIX programmers and companies which are selling to the UNIX
| marketplace are so caught up in keeping their products easily portable between
| different UNIX machines that they generally make little or no effort to
| optimize their code via assembly.  They write everything in C and a well
| written 100% C program generally can't hold a candle to a well written
| assembly program--the compilers just aren't that good and they never will
| get that good since the UNIX compilers themselves are caught up in the
| portability trap.
+---------------

Portability doesn't necessarily preclude speed.  Look up the Free Software
Foundation's "gcc" compiler -- it does a *lot* of optimization, and is
written in (nominally) portable C code.

On the other hand, the portable applications can be moved from DOS to OS/2
just as easily as from UNIX-386 to UNIX-68K.  Blinding speed is great, but
if it doesn't run under your OS (or does so only in a limited way, as with
OS/2 (i.e. it singletasks, when OS/2 is multitasking) it doesn't benefit you
anywhere near as much as a slower one that *does* work in your environment.

Compilers in portability traps:  Granted that AT&T's Portable C Compiler is
the most common C compiler under Unix -- because it either comes with Unix
or is an easily available accessory -- it's not the only one.  People who
want to pay for it can get the Green Hills compiler for the 68K.  People who
are willing to do the work to maintain it can get GCC, mentioned above.
Other GOOD optimizing compilers exist, and programs compiled with them run
rings around programs compiled with pcc.  (I recall when a 68020 System V
system I used to use got an OS upgrade which included the OS and utilities
being recompiled with the Green Hills compiler instead of pcc.  It was
DEFINITELY noticeable.)  I'll also mention that MSC and Turbo C aren't
optimizing compilers to the same degree as are available for Unix;
optimization beyond "peephole" pcc-style optimization and some relatively
simple cases is decidedly non-trivial.  And you pay through the nose for it,
in one way or another.

Another point -- optimization technology took a big leap forward when RISC
processors became popular.  It *had* to.  I'm not sure how much of that has
leaked back into the world of CISC compilers, but when it does compilers
will get even better.

Let's not bring in specious arguments.  The tools exist to make C very
nearly as fast as assembler, if developers are willing to put some of their
profits into their compilers.

++Brandon
-- 
Brandon S. Allbery, moderator of comp.sources.misc	     allbery@ncoast.org
uunet!hal.cwru.edu!ncoast!allbery		    ncoast!allbery@hal.cwru.edu
      Send comp.sources.misc submissions to comp-sources-misc@<backbone>
NCoast Public Access UN*X - (216) 781-6201, 300/1200/2400 baud, login: makeuser

allbery@ncoast.ORG (Brandon S. Allbery) (04/22/89)

As quoted from <258@jwt.UUCP> by john@jwt.UUCP (John Temples):
+---------------
| Now that Microport appears to be defunct, I wonder how WordPerfect Corp. feels
| about entering the 386 Unix market.  Will this frighten off other prospective
| Unix developers like Lotus?  I suppose Unix will survive through it all, but
| can it succeed commercially?
+---------------

Why not?  You seem to think that the demise of Microport signals the demise
of 386 Unix; may I point out that with the number of companies selling 386
Unix (SCO, Interactive and AT&T, and formerly Microport, not to mention left-
field types) there are more than the market really needs.  It's unsurprising
that the companies which can't cut it fall by the wayside.  Nor is it
surprising to me that Microport was the first, given that they have acquired
a reputation for buggy systems.  (Well, systems with more experimental parts
than should be in a supposed production system.)  Of such parts are made a
free-market system.  (Not that people used to a Microsoft monopoly on the OS
would expect such things.  But on the other hand, it means that Microsoft
doesn't have an incentive to make things work better... until a competitor
does come along.  Maybe you'll be lucky and Unix will convince Microsoft to
put some thought into making OS/2 halfway reasonable; or maybe they'll pull
an AT&T and wilt.  [Explanation:  it's pretty well accepted that AT&T's
computer division collapsed when they suddenly found themselves having to
compete for a market.])

(On re-reading the above, it occurs to me that it looks a bit like a flame
of Unix detractors.  But if you'll think about it, you'll realize that it
cannot help but affect the situation that the only real competitor to MS-DOS
came out too late to have any effect.  Unix benefits from the competition in
the 386 market, just as it's getting a much-needed shake-up on a more
inclusive level from the competition between the OSF and UI.  And both Unix
*and* OS/2 will benefit from competition between them.)

WordPerfect Corp. sells not only a Microport version of WP 4.2, but also an
SCO Xenix version.  (And the "A"-word, which I'm sure you can guess by now...
;-) I'd guess that the SCO version sold better than the Microport version
anyway.  If WordPerfect wants the market, it'll release a 386/IX version of
WP; it wouldn't take much work to change the Microport one.  (The biggest
win of Unix is that this is so often true.)
-- 
Brandon S. Allbery, moderator of comp.sources.misc	     allbery@ncoast.org
uunet!hal.cwru.edu!ncoast!allbery		    ncoast!allbery@hal.cwru.edu
      Send comp.sources.misc submissions to comp-sources-misc@<backbone>
NCoast Public Access UN*X - (216) 781-6201, 300/1200/2400 baud, login: makeuser

egs@u-word.UUCP (04/25/89)

Written  8:25 pm  Apr 21, 1989 by allbery@ncoast.ORG in u-word:micro-ibmpc
+--------------
| As quoted from <258@jwt.UUCP> by john@jwt.UUCP (John Temples):
| +---------------
| | Now that Microport appears to be defunct, I wonder how WordPerfect Corp.
| | feels about entering the 386 Unix market.  Will this frighten off other
| | prospective Unix developers like Lotus?  I suppose Unix will survive
| | through it all, but can it succeed commercially?
| +---------------
| 
| Why not?  You seem to think that the demise of Microport signals the demise
| of 386 Unix; may I point out that with the number of companies selling 386
| Unix (SCO, Interactive and AT&T, and formerly Microport, not to mention left-
| field types) there are more than the market really needs.
+--------------

        I dislike seeing the continual writing off of Microport.  They
are still operating, still selling the product ( a very good one, and in
my opinion, a much better one than the competition that I have seen ) and still
shipping product.

	Microport is in Chapter 11.  That means that they have more
liabilities than assets, but are trying to resolve the problem without
liquidating the firm.  I.e., they are reorganizing while continuing to
operate.  Other firms have done this successfully ( although I can't
think of any firms in the computer industry at the moment ) such as
Chrysler, PennCentral Corp., and many others...

	End of Microport defunct Flame.  Now on to some more reasonable
comments..

+--------------
|                                                           It's unsurprising
| that the companies which can't cut it fall by the wayside.  Nor is it
| surprising to me that Microport was the first, given that they have acquired
| a reputation for buggy systems.  (Well, systems with more experimental parts
| than should be in a supposed production system.)  Of such parts are made a
| free-market system.  
+---------------

	Yes, it is a free market system, and if Microport does fall by
the wayside, I will be saddened, but the system will have worked, and I
will use the product of one of the survivors.  (Since on the 386,
they are all pretty much interchangeable.)

+---------------
| WordPerfect Corp. sells not only a Microport version of WP 4.2, but also an
| SCO Xenix version.  (And the "A"-word, which I'm sure you can guess by now...
| ;-) I'd guess that the SCO version sold better than the Microport version
| anyway.  If WordPerfect wants the market, it'll release a 386/IX version of
| WP; it wouldn't take much work to change the Microport one.  (The biggest
| win of Unix is that this is so often true.)
+---------------

        WordPerfect may already work on Interactive 386/ix.  I have
copies of both, and have transferred several binaries built on the
Microport box onto the Interactive box (version 1.0.6) and had them
run with no problems.  Granted, I have not tried WordPerfect on either
Microport or Interactive.

-----
Eric Schnoebelen,			JBA Incorporated, Lewisville, Tx.
egs@u-word.dallas.tx.us				...!killer!u-word!egs
	"...we have normality"..."Anything you still can't cope with is
therefore your own problem..."	-- Trisha McMillian, HHGG

crewman@bucsb.UUCP (JJS) (04/26/89)

In article <631@eecea.eece.ksu.edu> (Terry Hull) writes:
>
>In article <9286@watcgl.waterloo.edu> (Stefan M. Vorkoetter) writes:
>>
>>In article <268@tree.UUCP> (Steve Rudek) writes:
>>>
>>>Portability is the enemy of excellence.
>>>
>>
>>Bull!
>
>I agree.  I do not think that portability is the enemy of excellence.
>On the other hand, writing inefficient code in a high-level language
>will not produce a good spreadsheet to run on an 8088.  You might have
>a shot at getting an acceptable product if you did some careful
>analysis of the program and optimized appropriate sections.  You may
>need to write a routine in assembly, or you may just need to change
>the algorithm.  
>

I think that whether or not portability prevents excellent software
depends heavily on the state of technology.  It used to be that the
hardware was a crippling bottleneck; there was NO WAY to write portable
software for machines like the Apple II and other such 8-bitters.  We
have reached a point where hardware limitations are ALMOST not an issue.
I stress ALMOST with modern Unix workstations in mind, like the Suns,
Apollos, and HPs, on which fairly portable X Window programs are
still just a bit too slow.  But as soon as hardware is no longer a limiting
factor, and the real bottleneck is the software, portability will no
longer be any kind of obstacle.  I believe this will be true in at most
a decade.

	Just an opinion.
		-- JJS
		[still working on that .signature]

stever@tree.UUCP (Steve Rudek) (04/28/89)

In article <2496@bucsb.UUCP>, crewman@bucsb.UUCP (JJS) writes:
> In article <631@eecea.eece.ksu.edu> (Terry Hull) writes:
> >
> >In article <9286@watcgl.waterloo.edu> (Stefan M. Vorkoetter) writes:
> >>
> >>In article <268@tree.UUCP> (Steve Rudek) writes:
> >>>
> >>>Portability is the enemy of excellence.
> >>>
> >>
> >>Bull!
> >
> >I agree.  I do not think that portability is the enemy of excellence.
> >On the other hand, writing inefficient code in a high-level language
> >will not produce a good spreadsheet to run on an 8088.  You might have
> 
> I think that whether or not portability prevents excellent software
> depends heavily on the state of technology.  It used to be that the
...
> still just a bit too slow.  But as soon as hardware is no longer a limiting
> factor, and the real bottleneck is the software, portability will no
> longer be any kind of obstacle.  I believe this will be true in at most
> a decade.

Well, I'm back!  Sorry I missed the interim discussion.

I don't get any particular thrill in stating a reality; I'm tired of having
to switch operating systems every 5 years and I really would prefer to see
UNIX and high level languages (not necessarily C) win out.

I read just the other day in _Computer_Systems_News_ that Microsoft's OS/2
for the 386 is being delayed because of the difficulty of converting
100,000+ lines of 286 assembly to 386 assembly.  This is *Microsoft*,
people: You know--the company which markets perhaps the best optimized 
C compiler in the world.  And they aren't writing OS/2 in C; or even following
the UNIX lead and doing it 95% in C with bottlenecks in assembly.  You
suppose they're just stupid?

I understand that VMS (the preeminent OS for the Digital VAX minicomputer
line) is primarily or entirely in assembly as well.  I guess they missed the
boat?

No.  The fact is that UNIX on a VAX doesn't even compare, from a pure
performance standpoint, with VMS.  And the efficiency of UNIX on the 386 is
almost certainly going to look rather sickly when compared to a mature
version of 386 OS/2.

Does anyone remember the UCSD "P-Code" operating system which at one point
was being touted as superior to MS-DOS because it allowed developers to sell
basically the same software for disparate machines such as the Apple II and
the IBM PC?  Just a couple small problems: (1) it was dog slow and (2) it
couldn't offer products to compete with things like 1-2-3 which insisted on
"cheating" and doing non-portable stuff like direct screen writes.

Sure, you can always say:  "Throw more hardware at the problem!  If you can't
do it on a 286 use a 386.  C may not be fast enough on a Micro but what
about a Mini?  No?  How about a Cray supercomputer?"  Unfortunately, better
hardware costs (almost exponentially) more money and so there is *always* going
to be a desire for the BEST software to push the limits of the hardware--and
that just can't be done in a "portable" manner. Hardware will be "fast
enough" to obviate the advantage of assembly language shortly after the
appetite for bigger and better software is satisfied.  But you all want
windows, don't you?  And next year you'll want voice recognition and speech
synthesis.  So do I.  The growth in expectations will go on *forever*.  Sure,
better coding and algorithms may give a C program a temporary lead.  But
no matter what can be done in C it can be done BETTER in assembly.  It
sorrows me that there is this tradeoff.  But I'll repeat:  Portability IS the
enemy of excellence.
----------
Steve Rudek ucbvax!ucdavis!csusac!tree!stever  ames!pacbell!sactoh0!tree!stever

terry@eecea.eece.ksu.edu (Terry Hull) (04/29/89)

In article <274@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
>
>Well, I'm back!  Sorry I missed the interim discussion.
>
>I read just the other day in _Computer_Systems_News_ that Microsoft's OS/2
>for the 386 is being delayed because of the difficulty of converting
>100,000+ lines of 286 assembly to 386 assembly.  This is *Microsoft*,
>people: You know--the company which markets perhaps the best optimized 
>C compiler in the world.  
Hold on here.  The GNU C compiler is certainly more highly optimizing than 
the Microsoft C compiler.  

[stuff deleted]

>And the efficiency of UNIX on the 386 is
>almost certainly going to look rather sickly when compared to a mature
>version of 386 OS/2.
Maybe so, but who can wait that long?  If we wait for a mature OS/2
written in '386 assembly language, UNIX will be running on '486s and
still providing more functionality than OS/2 on that "tired, old,
broken-down '386."  How long is it going to take to get a "mature"
OS/2 for the '386?  How long does it take to get all the bugs out of
100,000 lines of assembly code?  How 'bout getting OS/2 to run on one
of the new inexpensive RISC machines?  Do we write 250,000 lines of
assembly to accomplish that task?  

[stuff deleted]

>Sure, you can always say:  "Throw more hardware at the problem!  If you can't
>do it on a 286 use a 386.  

At the current time, I think MIPS are cheaper than programmer hours.  
If that were not true, would not all of our applications be written in
assembly language?  If programmer time were cheaper and MIPS more
expensive, languages like Lisp would never be used.  They take less
programmer time and more MIPS than an equivalent program in C.  How
about 4th generation database languages?  They are certainly not as
efficient as assembly language, yet are being widely used.  Anybody
want to take a shot at writing GNU Emacs in 8088 assembly so that it
will be smaller and faster?  Not me.  I buy the hardware to support
the real thing.  

You must choose which path you wish to take.  If you want to wait
for assembly language programmers to code OSs and applications in
assembly language so that you can buy less memory and disk space, that
is fine.  I would rather buy the disk space for things like GNU Emacs,
TeX, troff, and news that will enable me to use the currently
available (note: portable) implementations.  When a "mature OS/2" is
available for the '386, I will be happily banging away on my '486 or
SPARC or MC88000, and I will still be getting more work done than the
guy who waited for OS/2.  After all, isn't the purpose of computing
getting more done in less time?

>Unfortunately, better
>hardware costs (almost exponentially) more money and so there is *always* going
>to be a desire for the BEST software to push the limits of the hardware--and
>that just can't be done in a "portable" manner. 

Unfortunately, when there are no longer enough programmer hours
available in the world to implement this "BEST" software in assembly
language, the definition of "BEST" will no longer be fastest and
smallest.  If the computer that you are writing this software for is
no longer produced by the time you finish the project, can this
software possibly be called "BEST?"  Whatever happened to the
"timely" part of "BEST"?

>But I'll repeat:  Portability IS the
>enemy of excellence.

To quote another USENETTER, "Bull."
-- 
Terry Hull 
Department of Electrical and Computer Engineering, Kansas State University
Work:  terry@eecea.eece.ksu.edu, rutgers!ksuvax1!eecea!terry
Play:  tah386!terry@eecea.eece.ksu.edu, rutgers!ksuvax1!eecea!tah386!terry

nelson@sun.soe.clarkson.edu (Russ Nelson) (04/29/89)

In article <635@eecea.eece.ksu.edu> terry@eecea.eece.ksu.edu (Terry Hull) writes:

   Anybody want to take a shot at writing GNU Emacs in 8088 assembly
   so that it will be smaller and faster?

You'll probably laugh at me, but that's exactly what Freemacs is.  And, given
that the original target machine only had 192K, I did as much as I could
with as much as I had.
--
--russ (nelson@clutx [.bitnet | .clarkson.edu])
Chrysler gets a bailout -- the rich get theirs and everyone else gets trickles.

fargo@pawl.rpi.edu (Irwin M. Fargo) (04/29/89)

In article <635@eecea.eece.ksu.edu> Terry Hull writes:
>In article <274@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
>>
>>But I'll repeat:  Portability IS the
>>enemy of excellence.
>
>To quote another USENETTER, "Bull."

One way we might be able to clear things up here is if we change the phrase
"Portability is the enemy of excellence" to "Portability is the enemy of
speed."  By making programs portable, you obviously have to make a compromise
in terms of speed.  C programs are NEVER going to run faster than Assembler
programs on the SAME machine.  I am all for portability, but fast portability.
In my opinion, as software techniques become more and more refined, we will
be able to create more powerful optimizing compilers for less money.  I can
envision a day (a day quite some time away) when a compiler will be able to
create compiled code that runs only marginally slower than the same Assembler
program.  If anything, I feel THAT is something we should shoot for.  If we
can reach that goal, then the trade-off between portability and speed will
become almost unnecessary.

Thank you and happy hunting!             Actually: Ethan M. Young
  ____ [> SB <]    "Travel IS life"      Internet: fargo@pawl.rpi.edu
 /__   -=>??<=-       - Irwin M. Fargo   Bitnet (??): usergac0@rpitsmts.bitnet
/   ARGO : 3000 years of regression from the year 4990

john@jwt.UUCP (John Temples) (04/30/89)

In article <274@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
>This is *Microsoft*,
>people: You know--the company which markets perhaps the best optimized 
>C compiler in the world.  And they aren't writing OS/2 in C [...] You suppose
>they're just stupid?

I doubt they're stupid.  But just because they're successful at marketing
their products doesn't mean their products are implemented the "best" way.  I
saw comments in the MKS Newsletter stating that they felt certain features in
OS/2 were not implemented the "best" way -- because UNIX had already done it
that way, and Microsoft didn't want OS/2 to be like UNIX.  Yes, this is
*Microsoft*, the company that finally came out with a multitasking OS...on a
CPU that has been obsolete for years.  Yes, this is *Microsoft*, the company
that can't do DOS multitasking yet, even though UNIX has been doing DOS
multitasking on the 386 for years.  And as for the "best optimized C compiler"
(I assume you meant "best optimizing"), yes, it has a good optimizer -- when
it generates good code.  I've put tens of thousands of lines of code through
MSC 5.0/5.1.  After initial experiences with 5.0, I wouldn't use it.  5.1
fixed many optimizer bugs, but you still have to be careful with the
optimizer.  Yes, Microsoft is a big, successful company.  But I don't see any
Microsoft products that I think are incredibly innovative or unusual.  The old
"ten million fans can't be wrong" argument doesn't wash with me. 
  
>And the efficiency of UNIX on the 386 is
>almost certainly going to look rather sickly when compared to a mature
>version of 386 OS/2.

I've never used OS/2, but I've seen many comments indicating that it's quite
slow -- slower than UNIX, even.  I would imagine that it will improve in the
future, but to the point where it will make UNIX look "sickly"?  I doubt it. 
And it's just as likely that UNIX's performance on the 386 will continue to
improve as well.

>Does anyone remember the UCSD "P-Code" operating system which at one point
>was being touted as superior to MS-DOS because it allowed developers to sell

I really don't think comparing P-code, which is interpreted, to compiled
C is valid at all.  No, C will probably never be as fast as assembly.  But
improvements in compiler and optimization technology will continue to close
the gap.
-- 
John Temples - UUCP: {uiucuxc,hoptoad,petsd}!peora!rtmvax!bilver!jwt!john

kc@rna.UUCP (Kaare Christian) (05/01/89)

In article <274@tree.UUCP>, stever@tree.UUCP (Steve Rudek) writes:
> 
> I read just the other day in _Computer_Systems_News_ that Microsoft's OS/2
> for the 386 is being delayed because of the difficulty of converting
> 100,000+ lines of 286 assembly to 386 assembly.  This is *Microsoft*,
> people: You know--the company which markets perhaps the best optimized 
> C compiler in the world.  And they aren't writing OS/2 in C; or even following
> the UNIX lead and doing it 95% in C with bottlenecks in assembly.  You
> suppose they're just stupid?

It's pretty common knowledge that OS/2 *IS* being rewritten in C. Microsoft
doesn't want to miss out on RISC developments, etc. Their goal is
ubiquity; portability combined with excellence is required to attain that goal.

> I understand that VMS (the preeminent OS for the Digital VAX minicomputer
> line) is primarily or entirely in assembly as well.  I guess they missed the
> boat?
> 
> No.  The fact is that UNIX on a VAX doesn't even compare, from a pure
> performance standpoint, with VMS.  And the efficiency of UNIX on the 386 is
> almost certainly going to look rather sickly when compared to a mature
> version of 386 OS/2.

The VMS vs. UNIX performance wars have raged over the years, and they
don't need to be reignited by someone who, apparently, is not very
familiar (note the word "understand" above) with VMS. In summary, VMS
is better at some things, Unix better at others. VMS has some deep
hooks for database and transaction sorts of things, and it has some
apps that do those sorts of things quite well. Unix is far more
flexible, is better at supporting a heterogeneous workgroup, and it can
task switch much faster. BTW, there is no such thing as "pure
performance" -- it's always "performance of chore X." BTW2, a mature 386
OS/2 may have higher performance than Unix, but the difference will
probably be due to what features are in "mature 386 OS/2", and how they
are implemented. Lightweight processes (threads) can be a win, because
you can avoid time consuming context switches. (But apps that rely on
threads will be very non-portable). Single user operation can also be
an efficiency win. (But it has obvious drawbacks and limitations.) And
a lack of security high and low can also be an efficiency win. (But
again, the drawbacks are obvious.) If "mature 386 OS/2" is equal to or
better than Unix in functionality *and* performance, then few will use
Unix.  Even Microsoft doesn't make this claim; they simply think OS/2
is better suited for business users.

 
>       Sure,
> better coding and algorithms may give a C program a temporary lead.  But
> no matter what can be done in C it can be done BETTER in assembly.  It
> sorrows me that there is this tradeoff.  But I'll repeat:  Portability IS the
> enemy of excellence.
> ----------
> Steve Rudek ucbvax!ucdavis!csusac!tree!stever  ames!pacbell!sactoh0!tree!stever

It's certainly possible to rewrite most C programs in assembler and make them
run faster, but the improvement is usually modest. On a few special things, the
improvement may be significant. But more often, it's better algorithms (i.e.
using a better search or sort) or different techniques (writing
directly to the screen buffer instead of using BIOS calls) that make
real differences.

Experience has shown that you can usually increase a program's performance
by rewriting it. Many assembly programs have had their performance
significantly boosted by rewriting them in C, because it becomes much
easier to use more sophisticated methods. Similarly, C programs can be
rewritten in C or assembly to be more efficient.  As has been shown
many times, programmer productivity is fairly constant across
languages, measured in lines of debugged code per day. (There is a huge
individual variation, some people are better than others, but most
people will write N lines a day, be they lines of assembly, C, Lisp, or
4GL database.) The higher level languages get you more functionality
(and usually more portability) per line, at lower execution efficiency.
Just where you make your tradeoff depends on your market, the skills of
your programmers, etc. Excellent programs can be written in 4GL or in
assembly. A given program can be written faster and more reliably in C
(than in assembly), but if you are planning to sell several hundred
thousand copies, and you are betting the company, then you will want to
put a great deal of effort into micro efficiency, which is where
assembly language excels.

A related note: Microsoft and many other PC software companies are
finding it increasingly important to make their apps available on a
variety of platforms: DOS, OS/2, Windows, Unix, the Mac, etc. The
emerging approach is to write a portable core "engine" (usually in C)
plus glue and user interface routines for each supported system. Thus
80% of the app is "portable", the other 20% is not portable.  Most of
these companies are planning to write "excellent" software. It's a
no-brainer to decide on C (because it is relatively portable) for the
core engine.

Portability isn't an absolute. Few (if any) major apps are 100% portable.
But something that is 80 or 90% portable is far better than something
in assembly that is 0% portable, unless you have unlimited budget and
time for development.

Portability isn't the enemy of excellence. Rather, the enemies are some
of the following: limited programmer skills, limited vision by
software designers, restricted development budgets and times, choosing the
wrong tools (i.e. using C where it should be asm. or vice versa), and
the constantly moving targets. More portable apps can distribute the
cost of development across multiple platforms, can gain from the
synergy of multiple platforms, and are the most economical (but not
the only) way to build excellent software.

Kaare Christian
kc@rockefeller.edu

stever@tree.UUCP (Steve Rudek) (05/02/89)

In article <2333@rpi.edu> fargo@pawl.rpi.edu (Irwin M. Fargo) writes:
>In article <635@eecea.eece.ksu.edu> Terry Hull writes:
>>In article <274@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
>>>
>>>But I'll repeat:  Portability IS the
>>>enemy of excellence.
>>
>>To quote another USENETTER, "Bull."
>
>One way we might be able to clear things up here is if we change the phrase,
>"Portability is the enemy of excellence" to "Portability is the enemy of
>speed"  By making programs portable, you obviously have to make a compromise
It's the enemy of speed AND size AND functionality.  There is very
definitely a "good enough for UNIX" mentality.  For example, I read an
article about WordPerfect Corporation's porting efforts a couple of months
ago.  The WordPerfect representative described how the unix ports are
finished months ahead of the ports to the Macintosh, MS-DOS, etc. because (to
paraphrase) "the UNIX ports can be done in C."  Well, yes, BUT they *could*
also be done in assembly.  Or, to be more fair, *all* the ports could be
done in C.  Except that we all know that if Wordperfect tried to do that
their product would flop!

It is a lot harder for the programmer to judge resource consumption
under unix ("Maybe it just *seems* slow because other users are hogging
system resources.").   But mainly I think there is a problem with significantly
lowered expectations by unix users--which permits a laziness on the part of
the vendors.  And the vendors all think:  *portability* is what UNIX is ALL
about!  At this point I'd guess there are MANY TIMES OVER as many 386/UNIX
users as OS/2 users.  But you won't see Wordperfect go out of their way to
do an assembly version of Word Perfect for 386 unix users.  AND, more
tellingly, you won't see them try and foist off a "portable" C version on
OS/2 customers.

THAT mentality is a horrendous problem for UNIX acceptance as it battles
OS/2 for the moribund DOS market.  So UNIX mainly gets second rate,
"generic" products whereas OS/2 gets all the "WOW!  I'm gonna buy OS/2 to
run THAT!" products.  Let me present my point a little differently:
virtually *every* application which is sold for 386 UNIX can be ported to
OS/2 with relative ease.  Hardly *any* application which is sold for OS/2
can be ported to 386 UNIX without a MAJOR effort.  And usually when the
vendor does elect to make such a port to UNIX it considers it "acceptable"
to give up significant performance and functionality in return for
future portability.

A very sad state of affairs.
-- 
----------
Steve Rudek  {ucbvax!ucdavis!csusac OR ames!pacbell!sactoh0} !tree!stever

chris@utgard.UUCP (Chris Anderson) (05/02/89)

In article <13595@ncoast.ORG> allbery@ncoast.UUCP (Brandon S. Allbery) writes:
>
>Another point -- optimization technology took a big leap forward when RISC
>processors became popular.  It *had* to.  I'm not sure how much of that has
>leaked back into the world of CISC compilers, but when it does compilers
>will get even better.
>
>Let's not bring in specious arguments.  The tools exist to make C very
>nearly as fast as assembler, if developers are willing to put some of their
>profits into their compilers.

Brandon makes a very good point here.  Optimizing compilers *are* the
future of computers.  And developers are putting their profits into
compiler technology.  In the world of RISC, the chips are all within
an eyeblink of each other.  What makes a difference is the compiler
technology that the companies put into their products. 

Sequent is a company that is making a big push on compiler technology.
They compete head-to-head with RISC computers, using a '386 cpu.  When
their compiler technology starts to trickle-down, you should see a 
big jump in how fast programs run.

Hardware is always going to develop faster than software can
keep up with it.  Optimizing compilers are one way of keeping up.

Chris
-- 
| Chris Anderson, 						       |
| QMA, Inc.		        email : {csusac,sactoh0}!utgard!chris  |
|----------------------------------------------------------------------|
| Of *course* I speak for my employer, would he have it any other way? |

afscian@violet.waterloo.edu (Anthony Scian) (05/02/89)

In article <552@rna.UUCP> kc@rna.UUCP (Kaare Christian) writes:
>In article <274@tree.UUCP>, stever@tree.UUCP (Steve Rudek) writes:
>> 
>> I read just the other day in _Computer_Systems_News_ that Microsoft's OS/2
>> for the 386 is being delayed because of the difficulty of converting
>> 100,000+ lines of 286 assembly to 386 assembly.  This is *Microsoft*,
>> people: You know--the company which markets perhaps the best optimized 
>> C compiler in the world.  And they aren't writing OS/2 in C; or even
>> following the UNIX lead and doing it 95% in C with bottlenecks in assembly.
>> You suppose they're just stupid?

Yes. A company that ignores history is doomed to repeat it.
Maybe they'll invent SNOBOL in five years. It is beyond comprehension
how anybody can develop an OS in assembly language in this day and age.
Where have the OS architects at Microsoft been for the last two decades?
They should have started in C (even if their compilers were weak) and
had a parallel development of a good stable compiler with good
code generation but instead we have OS/2 coded in assembly and MSC 5.1.

>It's pretty common knowledge that OS/2 *IS* being rewritten in C. Microsoft
>doesn't want to miss out on RISC developments, etc. Their goal is
>ubiquity; portability combined with excellence is required to attain that goal.

Where is the excellence in a product that will be six years too late?
With the conversion to C, we can expect the 386 version by 1993 (if we're
lucky). How can a RISC version of OS/2 run the binaries from MS-DOS?
Binary compatibility is the cornerstone of the Microsoft "vision"(?).
We will also have to factor in the time required for Microsoft to write
a good stable compiler that can optimize code besides null loops.
(they haven't got one for the 8086 after how many years?)

The future is looking better for UNIX, thanks to Microsoft's inept actions.

//// Anthony Scian afscian@violet.uwaterloo.ca afscian@violet.waterloo.edu ////
"I can't believe the news today, I can't close my eyes and make it go away" -U2

dold@mitisft.Convergent.COM (Clarence Dold) (05/02/89)

in article <369@utgard.UUCP>, chris@utgard.UUCP (Chris Anderson) says:

> Sequent is a company that is making a big push on compiler technology.
> They compete head-to-head with RISC computers, using a '386 cpu.  When
                                                       ^
						       |
		I don't think Sequent sells a single-386 box, more like 4-30.

> their compiler technology starts to trickle-down, you should see a 
> big jump in how fast programs run.
-- 
---
Clarence A Dold - dold@tsmiti.Convergent.COM		(408) 434-5293
		...pyramid!ctnews!tsmiti!dold
		P.O.Box 6685, San Jose, CA 95150-6685	MS#10-007

davidsen@steinmetz.ge.com (Wm. E. Davidsen Jr) (05/02/89)

In article <274@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:

| I understand that VMS (the preeminent OS for the Digital VAX minicomputer
| line) is primarily or entirely in assembly as well.  I guess they missed the
| boat?

	You may understand it, but it's not true. Depending on the
version of the o/s, 40-60% of the system is written in BLISS, a
VAX-specific high level language. I checked with our VMS guru, who has
source fiche, and major chunks are in BLISS.
| 
| No.  The fact is that UNIX on a VAX doesn't even compare, from a pure
| performance standpoint, with VMS.  And the efficiency of UNIX on the 386 is
| almost certainly going to look rather sickly when compared to a mature
| version of 386 OS/2.

	Wow! The people I talked to here and elsewhere in the company
thought that the better performance of Ultrix for "general processing"
like mail, word processing, etc, was important. They don't know that
their performance isn't pure.

	Slight sarcasm aside, even the sysmgrs who hate UNIX with a
passion agree that it does better under a heavy office processing load.
Database stuff may run a little faster since part of it is coded in the
o/s, but you pay a penalty in filesystem performance for other file
access. Also, for some time (over a year, but I don't know how much
longer) some of the magic file types couldn't be copied over DECNET.

	Having coded some fairly large (20-100k line) programs in
various languages, assembler isn't going to make a big difference in
speed or size over a reasonable C (or other) compiler. With a good
algorithm for the job you should see less than 2:1 in speed and 20% in
size (based on recoding a few critical routines).

	On the subject of doing things like direct screen writes, and
ignoring the fact that it limits you to a single tasking display, it was
neat stuff on CP/M and on a PC, but using high level languages and o/s
calls I can update the display now as fast as anyone could want. 

	Since the display hardware only updates every 35ms (or so) and
the eye won't notice anything much under 40ms, why worry about doing
something which is already close to the threshold of perceptibility? If
you can't see that it's faster, why do it?

	The cost of writing anything large in assembler is so high the
vendors are moving away from it. It makes them slower to market, and
means they have to charge more. 

	When the customer could see/feel the better performance it made
sense as a sales technique. When the customer couldn't run larger code
in his limited memory it made sense. When 90% of the computer market
was one o/s (first CP/M, then PC-DOS), portability was less of an issue.
When compilers generated garbage code it made sense. Those days are
gone, and not missed by those of us who lived through them.

	Today it's much harder to justify the time to write and debug
in assembler, and that trend is getting stronger. Save assembler for
the module at the innermost loop, a few interrupt handlers, and things
like that. All of the arguments for assembler are based on conditions
which no longer exist, and they assume that the average programmer can
write assembler as well as s/he writes C. Having taught both to people
who already knew one other language, I can say it isn't so; being able to write
good code in assembler is an art, and the line between competent code
and good code will never be crossed by most people.

-- 
	bill davidsen		(wedu@crd.GE.COM)
  {uunet | philabs}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

sawant@nunki.usc.edu (Abhay Sawant) (05/03/89)

In article <13546@watdragon.waterloo.edu> afscian@violet.waterloo.edu (Anthony Scian) writes:
>In article <552@rna.UUCP> kc@rna.UUCP (Kaare Christian) writes:
>>In article <274@tree.UUCP>, stever@tree.UUCP (Steve Rudek) writes:
>>> This is *Microsoft*,
>>> people: You know--the company which markets perhaps the best optimized 
>>> C compiler in the world.  And they aren't writing OS/2 in C; or even
>>> You suppose they're just stupid?
>
>Yes. A company that ignores history is doomed to repeat it....
>It is beyond comprehension
>how anybody can develop an OS in assembly language this day and age.

Objection 1: Microsoft C is certainly not the best optimizing C
compiler in the world.  Apart from evidence from benchmarks etc.
(after discounting stuff like the Sieve, which is worn out enough
for the compiler to "recognize" it), the sheer *intelligence* of tc's
code is mind-blowing.  Try disassembling a .exe created with msc and
compare it with what you see from tc.  That is what really settled the
issue for me (apart from tc being better whenever *i* did a comparison).

Objection 2: I think it's childish to bash microsoft along the lines
of "every sane person on the planet knows writing OS's in C is best,
surely they're being outrageously stupid to be writing one in
assembly".  Like everything else about writing an OS, the decision of
assembly vs. C is one of the tradeoffs before the designer.  I think
there is little doubt that assembly offers speed while losing
portability and becoming more expensive (time, people, money).  You may 
choose to differ in the relative importance of these two to you: it's 
just that the people out there in Microsoft chose to think different.  
There is no absolute answer to this tradeoff, as with essentially any 
engineering tradeoff.

I'm pretty certain that the performance which MS-DOS extracts out of
its hardware could not have been achieved by writing in C.  So writing 
MS-DOS in assembly (all the way through version 3.3, by which time
every PC in the world had all the memory a monster might need) cost them 
more, but the bux they made on nearly 10 million PCs seem to suggest 
that the extra effort paid off.  A monstrosity of a slow, 
bulky OS would've certainly impeded the success of the PC.  *I*, think 
it was a great tradeoff.  In engineering, "principles" are pretty
meaningless: getting the job done well is all that matters (that is a
principle).  And what is "well" is a function of the design objectives 
at hand.

*I* think writing OS/2 in assembly is an excellent idea.  I've seen a
big application like Ventura yearn for more computing punch on a 16 MHz 
286: it's pretty clear that even with a single user single task machine, 
we need all the punch that we can get.  So i'm thrilled that microsoft 
felt there was enough money in the PS/2 market to justify spending a 
lot on hand-writing in assembly.  I'd like a lot of bang for my buck 
when i buy a PS/2 (not yet plonked the cash :-) ).

Just my opinions...
-------------------------------------------------------------------------------
You're never too old to have a happy childhood.

ajay shah (213)745-2923 or sawant@nunki.usc.edu
_______________________________________________________________________________

tim@cit-vax.Caltech.Edu (Timothy L. Kay) (05/04/89)

In article (Anthony Scian) writes:
>Yes. A company that ignores history is doomed to repeat it.
>Maybe they'll invent SNOBOL in five years. It is beyond comprehension
>how anybody can develop an OS in assembly language this day and age.
>Where have the OS architects at Microsoft been for the last two decades?

That's just it.  In my (uninformed) opinion, Microsoft doesn't have
any *architects*.  According to the introduction to _Inside OS/2_,
this guy Letwin (the author of the book and "architect" of OS/2) is
somebody Bill Gates met at a hacker conference and hired immediately.
I doubt that he has any formal OS experience.  It is all just a hack.

chris@utgard.UUCP (Chris Anderson) (05/04/89)

In article <663@mitisft.Convergent.COM> dold@mitisft.Convergent.COM (Clarence Dold) writes:
>in article <369@utgard.UUCP>, chris@utgard.UUCP (Chris Anderson) says:
>
>> Sequent is a company that is making a big push on compiler technology.
>> They compete head-to-head with RISC computers, using a '386 cpu.  When
>                                                       ^
>						       |
>		I don't think Sequent sells a single-386 box, more like 4-30.

Whoops, you're right.  Actually, they do make a two-processor machine.
The point I was trying to make is that they are competing against RISC
architecture with a CISC chip.  Of course, they use some really interesting
tricks to make 386's handle that load, but at the same time compiler
technology is what they are concentrating on right now for the future.

Thanks for the correction!

Chris

-- 
| Chris Anderson, 						       |
| QMA, Inc.		        email : {csusac,sactoh0}!utgard!chris  |
|----------------------------------------------------------------------|
| Of *course* I speak for my employer, would he have it any other way? |

mcdonald@uxe.cso.uiuc.edu (05/04/89)

>	Having coded some fairly large (20-100k line) programs in
>various languages, assembler isn't going to make a big difference in
>speed or size over a reasonable C (or other) compiler. With a good
>algorithm for the job you should see less than 2:1 in speed and 20% in
>size (based on recoding a few critical routines).
I certainly call 2:1 speed difference a "big difference"!
If AutoBad were twice as fast, a 10 hour "hide" would be
only 5 hours! A three month molecular dynamics calculation would be
only six weeks!!!!!!!



>	Since the display hardware only updates every 35ms (or so) and
>the eye won't notice anything much under 40ms, why worry about doing
>something which is already close to the threshold of perceptability? If
>you can't see that it's faster, why do it?
I find it tough to generate a whole screen in 35 msec even on a fast
386 machine using carefully hand-optimized assembler code! Maybe
you could do a horizontal line routine in C and get it fast enough,
but I have never actually seen C screen graphics routines that
were anywhere near optimal.
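
A horizontal line routine is short enough to sketch. Here is a
hypothetical C version operating on an in-memory framebuffer (real code
would write to video RAM, e.g. the 0xA0000 segment in VGA mode 13h; the
buffer, dimensions, and clipping policy here are illustrative
assumptions, not anyone's actual driver). The point is that a good
memset() collapses the per-pixel loop into the same tight fill a hand
coder would write:

```c
#include <string.h>

#define WIDTH  320
#define HEIGHT 200

static unsigned char screen[WIDTH * HEIGHT];   /* stand-in for video RAM */

/* Fill pixels (x1..x2) on row y with the given color, clipping to the
   screen bounds. */
void hline(int x1, int x2, int y, unsigned char color)
{
    if (y < 0 || y >= HEIGHT)
        return;                     /* row entirely off-screen */
    if (x1 < 0)      x1 = 0;
    if (x2 >= WIDTH) x2 = WIDTH - 1;
    if (x1 > x2)
        return;                     /* span entirely off-screen */
    /* One library call instead of a per-pixel loop; many x86 C
       compilers turn memset into REP STOSB or better. */
    memset(screen + (long)y * WIDTH + x1, color, (size_t)(x2 - x1 + 1));
}
```

Whether this matches hand assembler depends entirely on the memset in
your library, which is exactly the "keep assembler in the innermost
loop" argument in miniature.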

>Save assembler for
>the module at the innermost loop, a few interrupt handlers, and things
>like that.
         And those critical screen graphics routines!

Doug McDonald

madd@bu-cs.BU.EDU (Jim Frost) (05/08/89)

In article <274@tree.UUCP> stever@tree.UUCP (Steve Rudek) writes:
|This is *Microsoft*,
|people: You know--the company which markets perhaps the best optimized 
|C compiler in the world.

This is very arguable.  It isn't even the best optimizing C compiler
for MS-DOS, much less the best in the world.  The MIPS C compiler
might very well be the best (it consistently makes tighter code than
people whose job it is to make tight code), but of course that's not
the same environment.  There are a variety of other compilers which
produce better code than MSC, although few under MS-DOS given its
memory constraints.

|And the efficiency of UNIX on the 386 is
|almost certainly going to look rather sickly when compared to a mature
|version of 386 OS/2.

That depends on what you're looking for.  The 80386 has been out for a
pretty long time, yet OS/2 doesn't run there yet.  IBM has already
built prototypes of 80486 machines (an 80486-equipped model 70 was
said to have "twice" the performance of the normal model at the same
clock rate).  When will OS/2 run native on the '486?  UNIX will be
running native before most people can get their hands on a
'486-equipped machine.  This lag is very detrimental.

|sorrows me that there is this tradeoff.  But I'll repeat:  Portability IS the
|enemy of excellence.

You are wrong.  A properly written portable program is usually easier
to maintain and debug than a non-portable one.  It may not be quite as
fast, but it is my opinion that high reliability is better for the
customer than optimum speed.  What difference does it make that your
program runs 20% faster if it breaks twice as often or doesn't
necessarily get the right answer?

To the developer, a portable program results in a much wider market, a
big win.  To the customer, a portable program means that it's more
likely to be supported on newer, faster hardware than a nonportable
one (eg OS/2 versus UNIX).  To the programmer, a portable program
probably means modularity, allowing you to snap in replacement code
which is better, stronger, faster (and perhaps takes advantage of
particular hardware) without breaking the application.

It used to be the case that hardware was so expensive that you tried
VERY hard to get 100% utilization.  Hardware costs are so much lower,
and performance so much higher, that it makes more sense to focus your
attention on other matters (such as interface and portability) now.
As hardware continues to improve, I believe you'll see this trend
continue.

jim frost
madd@bu-it.bu.edu

madd@bu-cs.BU.EDU (Jim Frost) (05/08/89)

In article <2333@rpi.edu> fargo@pawl.rpi.edu (Irwin M. Fargo) writes:
|C programs are NEVER going to run faster than Assembler
|programs on the SAME machine.

This is incorrect.  Very good optimizing C compilers have changed
this.  In one recent case, a (good!) programmer using a Silicon
Graphics machine spent a good deal of time writing a speed-critical
section in machine language only to have the C compiler produce code
which was one instruction smaller and ran faster.  He's since given up
trying to out-do the compiler.  Never say "never".

|I can
|envision a day (a day quite some time away) when a compiler will be able to
|create compiled code that runs only marginally slower than the same Assembler
|program.  If anything, I feel THAT is something we should shoot for.  If we
|can reach that goal, then the trade-off between portability and speed will
|become almost unnecessary.

While it is not the case for every architecture, that day has already
arrived for several and in some cases the compilers are better than
the programmers, producing faster code than the best efforts of
assembly programmers.  This shouldn't be surprising; how many people
can actually figure out optimal register allocation beyond a few
hundred lines of code?  A few thousand?  Few to none, I would expect,
and that is just one optimization technique.  Add to this the fact
that higher-level languages tend to be easier to debug than assembler
(about the same number of errors per line of code versus per CPU
instruction, quite a difference most of the time) and you see that
coding in assembler isn't very practical.

[This probably isn't the right forum to discuss these things in....]

jim frost
madd@bu-it.bu.edu

mikes@ncoast.ORG (Mike Squires) (05/08/89)

I ported Gary Perlman's |STAT 5.3 package to XENIX V 386 v2.3.1 on an AMI
20MHz 386, 64k cache, 4MB RAM, 150MB ESDI drive.  The examples, run on the
386 box under XENIX, took 7 seconds; the same under PC-DOS on an IBM PS/2
50Z took 44 seconds.  

The Dhrystone benchmark on the same system is 7000 for a 386 binary,
4000 for a 286 binary, 5000 for an 8086 binary.  I can also execute
programs that take up to 4400K of memory (and have - I modified the
Perlman regression routine to take more than 20 variables).

I also ran compress 4.0 on a PS 2/70, RS 6000 under XENIX (an old 68000
system), and on the 386 box.  The RS 6000 compressed a 512K file faster
than the 386 box under PC-DOS.  I believe the major reason for this is
that the 6000 is running Quantum 2080 HD's which are quite fast and that
the PS 2/70 uses a fairly slow ESDI drive (no cache used).  The UNIX file
system also seems faster than PC-DOS.

As an old mainframe person (CDC 3600/3800/6600/Cyber) I like working with
sources and with an operating system with a good set of utilities.  UNIX/
XENIX has that; PC-DOS definitely does not without massive extensions.
OS/2 286 is just too slow.  I can move my environment from one hardware
platform to the next with only minor hassles (the stuff really IS
portable!).

pcg@aber-cs.UUCP (Piercarlo Grandi) (05/11/89)

In article <30789@bu-cs.BU.EDU> madd@bu-it.bu.edu (Jim Frost) writes:

    In article <2333@rpi.edu> fargo@pawl.rpi.edu (Irwin M. Fargo) writes:
    |C programs are NEVER going to run faster than Assembler
    |programs on the SAME machine.
     
    This is incorrect.  Very good optimizing C compilers have changed
    this.  In one recent case, a (good!) programmer using a Silicon
    Graphics machine spent a good deal of time writing a speed-critical
    section in machine language only to have the C compiler produce code
    which was one instruction smaller and ran faster.  He's since given up
    trying to out-do the compiler.  Never say "never".

Agreed... One of the major reasons that has always been given for using
HLLs is that a code generator written by an excellent programmer (like
those who should write code generators) will typically be better in
speed and reliability than code written in assembler by an average
(like most application) programmer.
    
    |I can
    |envision a day (a day quite some time away) when a compiler will be able to
    |create compiled code that runs only marginally slower than the same Assembler
    |program.  If anything, I feel THAT is something we should shoot for.  If we
    |can reach that goal, then the trade-off between portability and speed will
    |become almost unnecessary.
    
    While it is not the case for every architecture, that day has already
    arrived for several and in some cases the compilers are better than
    the programmers, producing faster code than the best efforts of
    assembly programmers.  This shouldn't be surprising; how many people
    can actually figure out optimal register allocation beyond a few
    hundred lines of code?  A few thousand?  Few to none, I would expect,
    and that is just one optimization technique.

Let me add one story... The Burroughs mainframes have a very compact,
high-level (but RISCy) instruction set, designed for Algol. No assembler
program was ever meant to be written for that machine. Unfortunately, the
very first Algol compiler *had* to be written (for bootstrapping) in
assembler, and so it was.  Eventually, it was (reluctantly) recoded in
Algol, and it turned out to be faster and smaller. And this on a machine
with a relatively high-level machine language...  Ah, by the way, this
story is dated late fifties -- early sixties... :-) :-(.  [note: I don't
remember where I read this, but probably in Organick's book.]
    
    [This probably isn't the right forum to discuss these things in....]

Followups redirected to comp.arch.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

madd@bu-cs.BU.EDU (Jim Frost) (05/13/89)

In article <13630@ncoast.ORG> mikes@ncoast.ORG (Mike Squires) writes:
|The UNIX file
|system also seems faster than PC-DOS.

UNIX does caching by default; this alone can make the apparent speed
of the filesystem greater.  Newer UNIX filesystems (BSD FFS and
Silicon Graphics' EFS are two examples) also try to put data where it
will be easier (read: faster) to get, with good success.

MS-DOS does (some) write-through caching which slows things
considerably on disk writes.  With the right drivers you can give it
"real" caching which I found tended to speed things up anywhere from
20% to well into the hundreds (I had 4Mb of cache, though).  I have
yet to see any extension to MS-DOS which tries to optimize data
placement, with the exception of disk reorganizers which tend to
optimize for fragmentation and not for organization within cylinders,
etc.
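
The write-through versus "real" caching difference is easy to see in a
toy model (purely illustrative; no actual DOS or UNIX driver is this
simple, and the block counts and names here are invented). Write-through
pays for a disk write on every store; write-back marks the block dirty
and pays only once, when the block is flushed or evicted:

```c
#define NBLOCKS 8

static int disk_writes;                /* count of simulated disk I/Os */
static unsigned char disk[NBLOCKS];    /* simulated disk blocks        */
static unsigned char cache[NBLOCKS];   /* in-memory copies             */
static int dirty[NBLOCKS];             /* write-back bookkeeping       */

void write_through(int blk, unsigned char v)
{
    cache[blk] = v;
    disk[blk] = v;          /* every write hits the disk */
    disk_writes++;
}

void write_back(int blk, unsigned char v)
{
    cache[blk] = v;
    dirty[blk] = 1;         /* defer the disk write */
}

void flush(void)            /* e.g. on sync or eviction */
{
    int i;
    for (i = 0; i < NBLOCKS; i++)
        if (dirty[i]) {
            disk[i] = cache[i];
            disk_writes++;
            dirty[i] = 0;
        }
}
```

Rewrite the same block 100 times and write-through costs 100 disk
writes where write-back costs one at flush time; that deferral is where
the 20%-and-up speedups come from, at the price of losing unflushed
data in a crash.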

It's difficult to benchmark UNIX versus MS-DOS because most of the
benchmark programs are designed to test pure performance (eg how fast
you can write 20 megabytes or calculate pi to two million digits)
instead of testing average conditions, which is a much fairer and more
informative test.  Since the "average" UNIX load is usually ten to
twenty system processes and several users (each with two or more
processes), and the "average" MS-DOS load is one application, it's
pretty hard to come up with a comparison.

jim frost
madd@bu-it.bu.edu

alpope@token.Sun.COM (Alan Pope) (05/18/89)

In article <10571@cit-vax.Caltech.Edu>, tim@cit-vax.Caltech.Edu (Timothy L. Kay) writes:
> In article (Anthony Scian) writes:
> >Yes. A company that ignores history is doomed to repeat it.
> >Maybe they'll invent SNOBOL in five years. It is beyond comprehension
> >how anybody can develop an OS in assembly language this day and age.
> >Where have the OS architects at Microsoft been for the last two decades?
> 
> That's just it.  In my (uninformed) opinion, Microsoft doesn't have
> any *architects*.  According to the introduction to _Inside OS/2_,
> this guy Letwin (the author of the book and "architect" of OS/2) is
> somebody Bill Gates met at a hacker conference and hired immediately.
> I doubt that he has any formal OS experience.  It is all just a hack.


Gordon Letwin was formerly responsible for the HDOS operating system from
Heath.  HDOS was actually a real OS when compared to something like CP/M.
It supported real device drivers, among other things.  He had left Heath to
join Microsoft before I joined Zenith Data Systems in April 1981, but
evidence of his presence was still felt and respected.  I believe that he
had prior OS experience before Heath, but I don't remember.  You forget
(or perhaps you weren't born then) that almost everybody involved in micros
in the 70's was a ``hacker'' (in the original sense of the word).
						Alan L. Pope
						alpope@sun.com

alexande@drivax.UUCP (Mark Alexander) (05/19/89)

In article <10571@cit-vax.Caltech.Edu> tim@cit-vax.UUCP (Timothy L. Kay) writes:
>According to the introduction to _Inside OS/2_,
>this guy Letwin (the author of the book and "architect" of OS/2) is
>somebody Bill Gates met at a hacker conference and hired immediately.
>I doubt that he has any formal OS experience.

The introduction does mention that Letwin wrote an operating system
for Heath.  Does that count as "formal OS experience?"  I think so.
This was probably HDOS, which, according to a friend who used it
for a while, was a pretty nice OS, quite a bit more advanced than
CP/M.  But CP/M was already in much wider use elsewhere, so HDOS
didn't catch on.  Sort of the same situation OS/2 is in right now,
except that OS/2 has a better chance of catching on, like it or not.

It's a bit strange for me to defend a competitor in the OS market,
but I don't think Letwin deserves the insults.
-- 
Mark Alexander	(amdahl!drivax!alexande)

cbema!las@att.ATT.COM (Larry A. Shurr) (05/27/89)

In article <4666@drivax.UUCP> alexande@drivax.uucp.UUCP (Mark Alexander) writes:
>In article <10571@cit-vax.Caltech.Edu> tim@cit-vax.UUCP (Timothy L. Kay) writes:
>>According to the introduction to _Inside OS/2_,
>>this guy Letwin (the author of the book and "architect" of OS/2) is
>>somebody Bill Gates met at a hacker conference and hired immediately.
>>I doubt that he has any formal OS experience.

>The introduction does mention that Letwin wrote an operating system
>for Heath.  Does that count as "formal OS experience?"  I think so.
>This was probably HDOS, which, according to a friend who used it
>for a while, was a pretty nice OS, quite a bit more advanced than

It was HDOS, and HDOS was not only nice, it was truly amazing.  Looking
down from (what I thought were) the lofty heights of the DECsystem-10, I
deigned to forgive CP/M its primitive nature, owing to the
small-and-primitive 8080 environment.  Then I encountered HDOS and found
myself looking out from a far less imposing vantage.  I was amazed at
what the small box could do.  Nice job, Gordon.

>CP/M.  But CP/M was already in much wider use elsewhere, so HDOS
>didn't catch on.  Sort of the same situation OS/2 is in right now,
>except that OS/2 has better chance of catching on, like it or not.

HDOS was tied to the Heath name and the Heath "engine" (I've forgotten
the name of that computer now).  It need not have been, but Heath
was in the business of selling hardware, not software, so their
(nice) offerings were Heath-specific - understandable but unfortunate.
CP/M was more generally available and cheap - you could even get it
and bring it up on your home-built.  This gave CP/M more access to
the (very limited) available market.  When software products appeared
for CP/M, it was further boosted to prominence.  Meanwhile HDOS,
North Star DOS, and what-did-they-call-it DOS for the SOL "Terminal
Computer" remained hobbyist (pure :-)), with little or no commercial
usage.

(I had a friend in Houston who tried to make a go of selling
commercial systems based on Heath and HDOS.  Didn't really go
anywhere.)

If only HDOS were MS-DOS' precursor rather than CP/M!

regards, Larry
-- 
Signed: Larry A. Shurr (cbema!las@att.ATT.COM or att!cbema!las)
Clever signature, Wonderful wit, Outdo the others, Be a big hit! - Burma Shave
(With apologies to the real thing.  The above represents my views only.)
(Please note my mailing address.  Mail sent directly to cbnews doesn't make it.)