[comp.sys.ibm.pc] OS/2 and programming in the future

dave@micropen (David F. Carlson) (09/22/87)

Something that this discussion of OS/2 has lost sight of is the reason for
operating systems in general and on microcomputers in particular.  In general,
an operating system provides a means of coordinating activity and sharing 
computer resources.  Thus, a program with two open disk files can write to each
of them without fear of corruption.  On a microcomputer (in particular a single
tasking machine like PC/MS-DOS machines), coordination is very simple:  blocking
IO means that a request is satisfied on a first-come first-complete basis.  No
problem for the single task.  Of course, direct access to the machine (due to
the lack of resource protection in the lower members of the Intel 80x86 family)
has let programmers *break* the operating system's model of the machine:  clever
(and now necessary) background communications programs and TSRs avoid the OS
layer altogether.   In fact, the paucity and pigginess of the blocking
BIOS/MS-DOS IO calls virtually require programmers to circumvent the OS to get
at the hardware in application programs.  Those of us from the late, great CP/M 
days (and how many DOS users have written their own BIOS?) loved these little
hooks but most of us realized that the ONLY way to write portable code was to
constrain ourselves to the OS call interface.  That is because in CP/M we all
had *very* different hardware.  The sole reason that MS-DOS can be cleanly 
circumvented for screen programming, TSRs, etc. is the IBM-PC *standard*
hardware.  With the advent of the PS/2s and VGA (on top of MDA, CGA and EGA) and
various other new and as-yet-unreleased hardware mods, the ability of MS-DOS
programmers to cleanly circumvent the OS will diminish as surely as the
proliferation of hardware in the CP/M days made disk formats a nightmare.
(Where do you think XMODEM came from?)
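
To make the circumvention concrete, here is a sketch of the two styles of
putting a character on the screen.  (The far pointer, MK_FP and int86
plumbing below is Turbo-C-flavored; take it as an illustration of the idea,
not gospel.)

#include <dos.h>

/* The portable way:  go through the MS-DOS call interface. */
void putc_dos(char c)
{
    union REGS r;
    r.h.ah = 0x02;               /* DOS function 02h:  display output */
    r.h.dl = c;
    int86(0x21, &r, &r);         /* INT 21h:  the MS-DOS call gate */
}

/* The "real programmer's" way:  poke the character straight into the
 * memory-mapped text buffer.  This works only because every clone puts
 * the 80x25 color text page at segment B800h--the IBM-PC *standard*
 * hardware argued about above. */
void putc_direct(int row, int col, char c)
{
    char far *video = (char far *) MK_FP(0xB800, (row * 80 + col) * 2);
    *video       = c;            /* even byte:  the character */
    *(video + 1) = 0x07;         /* odd byte:  attribute, grey on black */
}

The second version is far faster and utterly unportable--which is the whole
point.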

So what is a company in the 80's to do?  Well, Microsoft has OS/2, which will
force programmers to use *their* standard interface in a way that MS-DOS, with
Intel real mode's lack of protection, could not enforce.  This is a good thing.
Then hardware dependency can be handled once at the OS/device driver level and
not by every application.  Development costs come down due to the well-defined
standard nature of the OS and the lack of necessity to hire obscurist assembler 
hackers to "get around" the OS.  So now we have a well-defined interface--or
do we?  The problem at this point is that OS/2 has three highly incompatible
parts:  OS/2 "native", "DOS-in-a-box", and real MS-DOS cross development.  Here
is the real rub for the company in the 80's:  to reach a PC/AT or PS/2 market
segment one must code the OS interface *three* different ways!  The DOS 
*version* must still do nasty things to figure out what hardware is where and
how to exploit it.  The "DOS-in-a-box" *version* must carefully use resources 
through the more or less "new" call standard to make sure the real-mode box
under OS/2 doesn't crash.  And the OS/2 *version* must use its new call
interface (a sketch of the three-way split follows this paragraph).
So, computer manager of the 80's, what do you do?  Just to support one market
segment (not to mention the Macs et al. that won't run these Microsoft-oriented
applications), up to three very different versions of the software must be written.
(Most smart managers would bring up a "DOS-in-a-box" version to hit the OS/2
market and then let the DOS hackers strip out the slow, standard MS-DOS calls
and replace them with the "real programmer's" versions, thus not losing MS-DOS
machine efficiency while still providing a sellable version for OS/2 users.
This "smart" view assumes DOS limitations like 640K aren't objectionable, though
they are in almost any "serious" applications program.)
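
To put some flesh on the three-version complaint, a sketch.  (The ifdef name
and the say() helper are mine, and the DosWrite signature is from memory--check
the toolkit documentation before believing me.)  One source file, three builds:

#include <string.h>

#ifdef OS2_NATIVE
#include <os2.h>

/* OS/2 "native":  the new protected-mode call interface. */
void say(const char *msg)
{
    USHORT written;
    DosWrite(1, (PVOID) msg, strlen(msg), &written);   /* handle 1 = stdout */
}

#else
#include <dos.h>

/* Real MS-DOS and "DOS-in-a-box" can share this source, but NOT the
 * rules:  under real DOS you may still cheat as above; inside the box
 * you must stick to documented calls like this one or bring down the
 * real-mode session. */
void say(const char *msg)
{
    union REGS r;
    struct SREGS s;
    r.h.ah = 0x40;               /* DOS function 40h:  write to handle */
    r.x.bx = 1;                  /* handle 1 = stdout */
    r.x.cx = strlen(msg);
    r.x.dx = FP_OFF(msg);
    s.ds   = FP_SEG(msg);
    int86x(0x21, &r, &r, &s);
}

#endif

Three builds of one file still means three test matrices; the ifdef hides the
divergence, it does not remove it.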

The gist of my position I have already written here:  there is a standard,
machine-independent set of operating system calls from which one can write
portable code that will run on many machines from many manufacturers:  AT&T's
SVVS and SVID.
The government has gone as far as to tell proprietary OS vendors to conform
or be passed over.  (Not that having the government legislate computer standards
makes me feel good:  it's just that they recognize that incompatible systems
are not in their best interest as software costs far outstrip hardware costs
in the long term.)
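
For contrast, this is what constraining ourselves to the standard call
interface buys.  A bare-bones file copy written strictly to the
open/read/write/close calls (short writes glossed over for brevity;
<unistd.h> is where newer systems keep the declarations) neither knows nor
cares whose iron is underneath:

#include <fcntl.h>
#include <unistd.h>

/* Copy src to dst using only the standard calls.  Nothing here depends
 * on the hardware or the vendor--only on the published interface. */
int copy_file(const char *src, const char *dst)
{
    char buf[4096];
    int in, out, n;

    if ((in = open(src, O_RDONLY)) < 0)
        return -1;
    if ((out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644)) < 0) {
        close(in);
        return -1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);      /* a real program would check short writes */
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}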

Now, many people say "uh oh UNIX crusader!"  Well, not really.  "Portable-code
for-maximum-market-coverage-with-fewer-incompatible-versions-for-enhanced
software-maintainability-and-lifetime" crusader I am.

The tenets of my belief are:
1).  Portable coding style and discipline make inherently better code from a
maintainability point of view.  There is no doubt in my mind that a hard
standard call interface eases maintenance.  In today's market, maintaining 
software over the product lifetime will cost 2X to 10X the development 
cost.  Having three incompatible versions will triple those costs.
With the margins in the marketplace and the cost of maintenance, this is the
difference between profit and loss for many software products.

2).	Text-oriented, serial-terminal-connected, multiuser machines will be
the most cost-effective way to connect the "majority" of users for the 
foreseeable future.  That is, I can put a user up on a terminal (say, a Wyse 50)
for $340.  A PC/XT clone with maintenance, maybe a hard disk or a network 
node, cannot be done for much less than $1000.  That is roughly 3:1, and my bet
is that the ratio will hold constant over time.  

The average user doing 1-2-3 or data entry or text entry
*does not need* 640x480x8:  just vanilla 80x24.  It is simply not cost-effective
for my secretary to have a PS/2 Model 50 ($5000) to do our mailing list.  It never
will be.  (And now I've ruined his hopes for Christmas in September!)  The
people that *need* graphics (like us in the CAD/CAM game) *need* a workstation
and *need* access to a bigger machine, because numeric analysis on micros is for
the birds.  For the Wall Street executives that *need* WYSIWYG, and that ilk:
buy a Macintosh on AppleTalk with a shared departmental LaserWriter.  This
solution is the easiest to support and the most cost effective.  (Of course,
I often argue that any executive worth his salt has an assistant to do layout
on anything worth worrying about.  The assistant can have the workstation of
his/her choice because they *need* it.)

Although my assumptions are not terribly demanding, many honestly disagree.
But from an engineering economics point of view, it is not feasible to support
multiple operating environments.  In the old days people could just write for
IBM mainframes, but even mainframe vendors are noticing that mainframe 
purchasing is falling off and that "just-IBM" isn't good enough to make the 
grade any more.  To remain competitive in today's market, portable, maintainable
code is necessary.  Whose standard doesn't matter--just that the OS interface
is well-defined and available on as many machines (and as many users for 
multiuser machines) as possible.  OS/2 will *only* be available on PC/ATish
and PS/2 machines.  Not IBM mainframes, not VAXes, not SUNstations or the
rest of the industry.  My compelling problem with OS/2 is that it locks the
software to the Intel architecture for the reasons I stated here two weeks
ago:  Microsoft has intimately tied OS/2's programmer's model to the Intel
80286 architecture and, in my opinion, removed the possibility of an 
industry-wide standard OS interface convention.
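
To see the tie concretely:  under OS/2 1.x an application asks for memory and
gets back a *selector*--a protected-mode 80286 segment handle, 64K maximum--and
builds far pointers out of it.  A sketch (the signature and the
size-0-means-64K convention are from memory; treat the details as my
assumption):

#include <os2.h>

/* Allocate one full 64K segment and return a far pointer into it.
 * "SEL" is an 80286 selector:  a concept with no analogue on a 68000,
 * a VAX, or anything else--which is exactly the lock-in complaint. */
char far *grab_64k(void)
{
    SEL sel;

    if (DosAllocSeg(0, &sel, 0))         /* size 0 requests a full 64K */
        return (char far *) 0;
    return (char far *) MAKEP(sel, 0);   /* selector:offset -> far pointer */
}

Port that model to a flat-address-space machine and every far pointer and
every 64K ceiling has to be unwound.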


-- 
David F. Carlson, Micropen, Inc.
...!{seismo}!rochester!ur-valhalla!micropen!dave

"The faster I go, the behinder I get." --Lewis Carroll