[comp.unix.msdos] LONG Reply To: Unix Vs. DOS

rwhite@nusdecs.uucp (0257014-Robert White(140)) (11/22/90)

Note 1: Before I start I would like to say that I use both
environments regularly and while I am attempting to "get away from
ms-dos" on my system at home I don't consider that the right thing for
everybody to do.

Note 2:  This qualifies as a nitpick in my mind, but I am compelled to
say it;  The OS in question is *NOT* "DOS";  It is MS-DOS or PC-DOS.
As I deal regularly with many people in different environments I find
myself telling the MS-DOS people not to call it "DOS" with some
regularity.  Why?  Because of DOS/VSE, Apple DOS [sic], MS-DOS, DOS/VM,
and the hundreds of instructional materials which discuss Disk
Operating System principles in general, all of which I and the students
and the faculty work with regularly.

Remember: Ignorant marketing people and salesmen made "DOS" out of
"MS/PC-DOS" the same way CBS News made "hacker" out of "criminal";
without thinking.

Note 3: This is long but there is a point;  feel free to skim what you
know already (like I could stop you ;-)

(anyway... B-)

In article <551@caslon.cs.arizona.edu> musa@cs.arizona.edu (Musa J. Jafar) writes:
>Unix is a real Hierarchy while DOS is not
>This is what I understood from my readings.
>But DOS looks like a hierarchy.

Not actually correct.  Both systems store information on their disk
systems in "real(tm)" hierarchies.  The principal differences
(in the simplest meaningful terms) are as follows:

ALL UNIX System facilities (other than "pure" function calls) occur
somewhere in the hierarchy, so that "everything is a file."  In MS-DOS
only sequences of stored bytes (e.g. regular, normal files) occur in
the disk hierarchies.  That is, there are no "device" files in
MS-DOS.
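
To make "everything is a file" concrete, here is a little C fragment of
my own (a sketch, not from any particular manual; /dev/tty is just the
obvious example): the same open() and write() calls that handle a
regular file also talk to a terminal, because the terminal has a name
in the hierarchy.

    /* My own sketch: a device addressed through the file hierarchy. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/tty", O_WRONLY);   /* the controlling terminal */

        if (fd >= 0) {
            write(fd, "hello from a device file\n", 25);
            close(fd);
        }
        return 0;
    }

Try the same trick under MS-DOS and the best you get is the handful of
magic names (CON, PRN, ...) the OS fakes for you; they are not real
entries in the directory tree.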

MS-DOS has one complete hierarchy structure for each separate
drive/disk/partition, while UNIX Systems integrate theirs into one
single hierarchy by "mounting" sub-hierarchies as directories.  The
UNIX system is simpler to manage in some ways but requires extra work.
The MS-DOS command "join" can be used to accomplish about the same
thing.
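
As a rough illustration (my own sketch; the two paths are only
examples), stat() will tell you which device a file lives on, so you
can see that "/" and "/usr" may sit on different disks even though they
look like one tree:

    /* My own sketch: two directories, one tree, possibly two disks. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat root, usr;

        if (stat("/", &root) == 0 && stat("/usr", &usr) == 0)
            printf("/ is on device %ld, /usr is on device %ld\n",
                   (long) root.st_dev, (long) usr.st_dev);
        return 0;
    }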

In UNIX Systems the information about any given file is kept in a
structure called an "i-node" (for information node) and all the inodes
are collected in a common pool (yes, I know about FFS etc., but it is
still effectively the same).  If you exhaust the pool you can add no
more files to a particular UNIX Disk/Partition until you free up an
i-node.  Directories are files which contain a list of name-and-number
pairs where the name is what you call it and the number is the number
of the inode that has the information the system needs to know about
the file (like who owns it, how big it is, and where on the disk to
find it).  There is a special "file" called the "free list" which
contains all the disk space not used by any other file.  By having the
same number next to several different names you create "multiple links
to the same file", which means simply that you have several names for
the exact same data, instead of several copies of some original.
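
If you want to picture it, an old-style directory entry is nothing more
than a name glued to an inode number, and link() is the call that adds
a second name for the same inode.  (My own sketch below; the struct is
only the classic shape, and "notes"/"notes.bak" are made-up names.)

    /* My own sketch of the idea, not a copy of <sys/dir.h>. */
    #include <stdio.h>
    #include <unistd.h>

    struct dir_entry {
        unsigned short d_ino;       /* inode number               */
        char           d_name[14];  /* name you call the file by  */
    };

    int main(void)
    {
        /* after this, "notes" and "notes.bak" share one inode,
           hence one set of data on the disk */
        if (link("notes", "notes.bak") != 0)
            perror("link");
        return 0;
    }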

In MS-DOS the information for each file is kept in the individual
directory entry for the file itself, and a chart of the disk space
called the "file allocation table" or FAT is used to locate exactly
where on the disk the data for the file actually is.  Because the
directory holds all the vital information about the file, the operating
system must use the directory entry whenever it operates on the file,
which means it must maintain additional information for each
directory involved.  Also there is no way to do "multiple links" under
MS-DOS.  All this means that complex hierarchies under MS-DOS are
less efficient than those under UNIX.
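
For the curious, chasing a FAT chain looks roughly like this (a toy
sketch of mine; the table, the starting cluster, and EOF_MARK are all
made up to show the shape of the thing):

    /* My own toy model of a FAT: entry n names the cluster after n. */
    #include <stdio.h>

    #define EOF_MARK 0xFFFF              /* pretend end-of-chain value */

    int main(void)
    {
        unsigned short fat[8] = { 0, 0, 5, 0, 0, 7, 0, EOF_MARK };
        unsigned short cluster = 2;      /* first cluster, taken from
                                            the directory entry */
        while (cluster != EOF_MARK) {
            printf("read cluster %u\n", cluster);
            cluster = fat[cluster];      /* hop to the next cluster */
        }
        return 0;
    }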


>questions:
>Why DOS is not real?
>What is the advantage of UNIX over DOS in a single user Env.

	There are (currently) limits to many facilities of MS-DOS.
On an IBM-PC compatible machine MS-DOS is limited to using what is
commonly referred to as "conventional" memory.  The "convention" this
memory adheres to is in reality the hardware layout of the original
PCs.  When you buy a machine like a 286 or 386 PC compatible, with more
memory than those original machines could handle, the assumptions made
by MS-DOS prevent the use of that memory, effectively causing you to
"waste" the money spent on any memory above the 1 megabyte level.
Programs and programming techniques now becoming standard in the
MS-DOS environment allow a programmer to, by exerting a little extra
effort (and using up a little extra CPU time), use this "wasted"
memory.  Each of these techniques takes up some of the conventional
memory of the system to both set up and use.  Use is not really a
problem because the programs that use the techniques get more memory
available for data than those that do not.  The problem is that the
conventional memory used in the setup is no longer available to the
programs that don't use the techniques, and these programs need this
memory badly.  *MOST* programs do not use these techniques, so when
you set up your system for those that do you penalize much/most of
whatever else you do on your system.
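
The arithmetic behind that 1-Meg box is simple enough to show (my own
sketch, nothing vendor-specific): a real-mode address is a 16-bit
segment and a 16-bit offset combined as (segment << 4) + offset, so the
biggest address you can form is 0xFFFF0 + 0xFFFF = 0x10FFEF -- barely
past the one megabyte mark, and that is all MS-DOS was built to see.

    /* My own sketch: why real-mode addressing tops out around 1 Meg. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long segment  = 0xFFFFUL;        /* largest segment */
        unsigned long offset   = 0xFFFFUL;        /* largest offset  */
        unsigned long physical = (segment << 4) + offset;

        printf("max real-mode address: 0x%lX (about %lu KB)\n",
               physical, (physical + 1) / 1024);
        return 0;
    }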

	Every "normal" device you add to an MS-DOS system requires a
MS-DOS compatible dirver.  These drivers must confor the the MS-DOS
restrictions so they also use up "conventional memory" because they
become effectively part of the operating system.  Similarly, when you
want to add convient features you run programs that alter the the
environment semi-perminantly, and they take up this memory also.
These are the "device drivers" and "terminate and stay resident
programs" you have been reading about.  Since you are restricted to
this little box (1 Meg) and you fill it with drivers and TSR programs
you begin to trade functionality for features.  Usually quite an
annoying problem.  One system with a mouse, extra-good display,
doccument scanner, fax board, and a special printer, can end up so
well equipt that it can't be used;  but exactly this system is what a
professional doccument prepairer might need.

	There are similar design flaws in the size restrictions on
things like hard disks used by MS-DOS.

	Whenever an MS-DOS programmer runs up against a limitation
like this he is free to ignore what he chooses and try to overcome it
(aren't we all), but the operating system has no way to protect itself
from this.  One program can work fine all by itself but blow your
system and your data all to hell when used in combination with another.
Not because of real concurrent access issues, but because one or both
of the programs choose to break some rule.  This is a real danger,
even with sanctioned and popular programs.  (If you don't believe me,
perform the following:  use fastopen [an MS-DOS 3.3 utility] on your
drive C, run WordPerfect 5.0 [or even 5.1 I think] on your drive C,
and retrieve-edit-save a document on your drive C;  then go look for
a backup of your data, because that document is history.  All of these
activities are recommended by the appropriate authorities [separately],
but together they are a disaster.)

	The problem is not one of hardware but of the assumptions that
were originally used to design MS-DOS.  The original small machines
and target market (marketing people again! ;-) led to these
assumptions, the use of which is still necessary to maintain the
"compatibility" everyone (in the MS-DOS market) demands.

	UNIX Systems provide loadable device drivers just like MS-DOS
(in concept), but they have nothing equivalent to a "terminate and stay
resident process" because there is no need.  The multi-processing
nature of the system provides the means for a completely separate
program to do its thing in its own memory space.  The "traditional"
memory limit is 8 Meg *per program*.  UNIX Systems juggle
memory so that rather than "having one box (of 1 Meg in MS-DOS) to put
everything in," a "separate box for each thing" mentality is used.
This way no one program or facility eats into the space of another.
There is of course a real-user-time cost for this, but everything in
the system is (should be) written with this in mind.
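
The UNIX answer to "I need a helper hanging around" is just another
process.  A sketch of my own (the child runs "date" purely as an
example):

    /* My own sketch: fork() gives the helper its own address space,
       exec() loads the helper program into that copy, and the parent
       loses none of its memory to it. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {                       /* child                */
            execlp("date", "date", (char *) 0);
            perror("execlp");                 /* reached on failure   */
            _exit(1);
        } else if (pid > 0) {
            wait((int *) 0);                  /* parent carries on    */
        }
        return 0;
    }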

	From a normal use standpoint UNIX Systems are superior
because, even though a given program may be inferior no matter what
the operating system, UNIX itself takes more effective steps to make
sure that a minor problem doesn't turn into a system-smashing event.
It is more difficult for a person to be deliberately destructive to
data (they don't own) and to the system as a whole, whereas any
"security measures" taken within MS-DOS programs are inherently
insufficient because the operating system itself is not at all secure.
The system tends to be "virus" resistant.  (Though in some ways more
attractive to the trojan-horse and worm type destructive programmers
because of an increased challenge and known connectivity.)
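
A tiny demonstration of what "the OS protects the data" means (my own
sketch; /etc/passwd is just a handy world-readable, root-writable
file): the kernel, not the application, decides whether the write is
allowed.

    /* My own sketch: permission checks live in the kernel. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>

    int main(void)
    {
        int fd = open("/etc/passwd", O_WRONLY); /* try to open for writing */

        if (fd < 0)
            printf("write access refused: %s\n", strerror(errno));
        else
            printf("opened for writing -- you must be root\n");
        return 0;
    }

No amount of cleverness inside an MS-DOS program buys you that, because
there is nobody underneath it saying no.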

THE *BIG* POINT(S):

	An OS is *only* a program that is intended to give you a nice,
convenient, and consistently predictable environment; it is not a
religious pursuit, so a "whatever works for whatever user" attitude is
in order.  Don't get talked out of what you need, and when what you
want and what you need conflict, go with the latter.  (But
pleeeeeessssse let me do that too; don't try to convince me that icons
would be easier and better and the "way of the future," and try to make
me use a picture of a trash can, or MOTIF. ;-)

	Every OS is limited by a basic set of assumptions.  Since UNIX
is evolving down from larger machines it is not being (currently) held
back by a smaller historical environment the way MS-DOS is.  UNIX's
assumptions have also had a little academic study instead of market
analysis (I don't like too much marketing, as you can tell), so its
founding assumptions are a little more open-ended than something like
OS/2 (half an operating system and being divided further by the
moment :-/); and it has a mature and involved user base.

	Any OS that cannot protect itself, and therefore its users
and their data (I am talking to you here, Microsoft Corp. and Apple
Computer Inc.), from "normal" programmers and "normal" operations
should be fixed by the developer or avoided by the user whenever
possible.

> other than you have more options under Unix (but you pay for them)

	Even when the *same* program (say Microsoft Word or
WordPerfect) is available in both environments, you will invariably
pay more for the UNIX System version.  This is based on greed and
"market analysis" which says that individual users are not going to be
buying UNIX Systems, and so a corporate-only price with no
incentives is generally offered.  This will change in time, but I
never expect the prices to be comparable.

	UNIX System software (both the OS itself and the applications)
is currently "afflicted" by what I call n-user-itis, where the
software is normally crippled and then more is charged for the
un-crippled versions.  Crippled software is annoying to me on a gut
level.  Granted, I am only one user (though I often have five or six
windows open, so a 2-user package sometimes *does* get in my way even
though I am "1 user"), but I still resent it.  Especially when the base
price for a "crippled" word processor is nearly a thousand dollars.

	A good thing about UNIX Systems is that it is nearly
impossible to copy-protect software on those systems.  The programs
simply do not have the access to the hardware to be overly cute.  I am
intrinsically against software piracy, because I expect this industry
to support me, so I would be a hypocrite to want the one thing and
do the other.  I am simply tired of paying for the copy protection
schemes in real cash and time spent dealing with the annoyances.  The
idea of having my computer re-write part of an "original" disk before
I use it makes my flesh crawl, because of personal experience.
(Perhaps the people who can afford the pricey systems mentioned above
will let the childish practice lapse; besides, there is a good pool of
excellent free-and-share-ware out there.)

	Which brings up the "good pool of excellent
free-and-share-ware."  With an OS being used at every level of
corporate, educational, and personal involvement, the quality of the
contributed art is higher than that of MS-DOS.  The aforementioned
programs make use of the system's facilities more often than they take
advantage of them, so they contain more bang-for-the-buck and less
neat-trick value for any given body of code.

	UNIX will have fewer "real-time semi-animated
gee-look-how-pretty-the-slow-picture-is I-just-learned-BASIC games."
You may consider this a plus or a minus as you see fit ;-)


Do I have more to say?  Yeah.  But if you got this far I'll just give
you a break and stop now. B-)


*******************************************************************
Robert White           |   Not some church, and not the state,
Network Administrator  |      Not some dark capricious fate.
National University    |   Who you are, and when you lose,
crash!nusdecs!rwhite   |      Comes only from the things you chose.
(619) 563-7140 (voice) |                             -- me.
*******************************************************************