[comp.arch] Free Software Foundation

mwm@eris.UUCP (01/01/70)

In article <6365@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
<In article <738@hplabsz.HPL.HP.COM> kempf@hplabsz.HPL.HP.COM (Jim Kempf) writes:
<>I, too, would like to see FSF's software on the PC/AT and PC/XT, but
<>I think it's unreasonable to expect FSF to do it. 
<
<The hard part is the OS kernel, and for that you can use MINIX
<(subscribe to comp.os.minix).  User-mode utilities should not be
<too hard to port to MINIX.

Minix is v7 - (things you didn't know about, and don't want even if
you did), the GNU kernel should be 4.3BSD + (things) - (security
features).

So for starters, you can expect the same set of things to break as you
do when doing ports across Unix families: doing anything nontrivial
with ttys, shared memory, networking, file name lengths, etc. 

To make things worse, the GNU utilities are mostly being written from
scratch assuming they'll have 4BSD and a large machine underneath
them. So expect daemons/utilities to want to talk to syslogd. Expect
programs that will try to create file names longer than 14 characters.
Expect programs that assume that memory is cheap.

"Not to hard" being a relative term (i.e. - iron is "not to hard"
compared to diamond, but not when compared to mud :-), I can't say
that Gwyn was wrong. I just want to dispell any thoughts that this
would be an easy project to tackle.

Better to run raw Minix, and port the things you want. If you get the
editor to work, let me know :-).

	<mike

--
Here's a song about absolutely nothing.			Mike Meyer
It's not about me, not about anyone else,		mwm@berkeley.edu
Not about love, not about being young.			ucbvax!mwm
Not about anything else, either.			mwm@ucbjade.BITNET

fdr@apollo.uucp (Franklin Reynolds) (01/01/70)

In article <486@ast.cs.vu.nl> ast@cs.vu.nl () writes:
>
>640K?  Why would he need 640K?  MINIX runs quite well on a 512K AT
>with one 1.2M floppy disk.  Who needs 640K?
>
>Andy Tanenbaum (ast@cs.vu.nl)

I assume you left off the ":)" by accident. If GNU is supposed to be
BSD 4.3 compatible it is a significantly more ambitious effort than
MINIX. MINIX is a decent, small system for teaching. GNU is supposed
to be suitable for research or commercial development.

I have been looking for an inexpensive, Unix-based system for my
personal use. MINIX isn't powerful enough to be useful to me, even 
for hobby hacking. Hopefully GNU will be.

Franklin Reynolds 
fdr@apollo.uucp

dhesi@bsu-cs.UUCP (01/01/70)

[lot of heated arguments pro and con Stallman's software plans]

One thing that puzzles me about the Gnu versus Minix debate is why
anybody should have any complaints at all to begin with.  If you want
something that runs on small systems, Minix is here now.  If you're
more ambitious, wait for the Gnu operating system.  And if you want
portability for your software that's possible too if you basically
stick to version 7 system calls and define a few macros for the few
things that might vary between systems.
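
The sort of macro shim being suggested can be tiny.  Here is a hedged
sketch in C; the macro and function names are mine for illustration,
not taken from any actual system header:

```c
#include <assert.h>
#include <string.h>

/* Illustrative portability shim: hide the few things that vary
 * between Unix families behind macros.  (Names invented here.) */
#ifdef BSD
#define NAMELIMIT 255   /* 4.2/4.3BSD allow long file names */
#else
#define NAMELIMIT 14    /* V7 (and System V) stop at 14 characters */
#endif

/* Trim a proposed file name to what the target system permits,
 * so the same source behaves predictably on either family. */
char *trim_name(char *dst, const char *want)
{
    strncpy(dst, want, NAMELIMIT);
    dst[NAMELIMIT] = '\0';
    return dst;
}
```

A program that funnels every generated file name through such a
routine at least fails predictably on a short-name system instead of
silently colliding.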

Similarly, if you find Gnu Emacs too bulky for your taste, there are
alternatives.  Both Jove and Microemacs are nice editors and they are
free too.

It's great to have choices.  Stallman is adding to our choices, not
detracting from them.

So what am I missing?
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi

kempf@hplabsz.HPL.HP.COM (Jim Kempf) (08/27/87)

In article <83@splut.UUCP>, jay@splut.UUCP (Jay Maynard) writes:
> 
> I'll support the Free Software Foundation when they give up their processor
> bigotry and decide to support the machine architecture that I use (PC/AT).
> Until then, why should I waste my money?
> 
I don't think it's a matter of bigotry. FSF has exactly three employees-
Stallman, one full time programmer and a secretary. They are looking to
hire a part time programmer this fall. With that kind of staff, it's
astounding that Stallman gets as much done as he does, and that it is
of any quality at all. Additionally, Stallman prefers using machines
with larger address spaces because they're easier to program. This
is understandable, since it allows him to concentrate his time on
things other than trying to reduce the size of things to fit in 640K.

I, too, would like to see FSF's software on the PC/AT and PC/XT, but
I think it's unreasonable to expect FSF to do it. 

		Jim Kempf	kempf@hplabs.hp.com

Usual Disclaimer

tower@bu-cs.BU.EDU (Leonard H. Tower Jr.) (08/28/87)

I have directed followups to comp.os.misc.

I'm now a GNU volunteer, and did spend a year being paid to work on
GNU (the salary came from a private foundation [not FSF]).  I've been
a director (aka corporate officer) of the Free Software Foundation
since its beginning.

In article <738@hplabsz.HPL.HP.COM> kempf@hplabsz.HPL.HP.COM (Jim Kempf) 
writes:
 > FSF has exactly three employees-
 > Stallman, one full time programmer and a secretary.  

Lots of misinformation here.  RMS has NOT been and is NOT currently
an employee of the Free Software Foundation.  He earns his livelihood
as a consultant.  FSF does NOT employ a secretary.  FSF is employing a
full time programmer and a part time shipping clerk.

 >  With that kind of staff, it's
 > astounding that Stallman gets as much done as he does, and that it is
 > of any quality at all.  

RMS is quite productive.  He has also had help from many hackers and
quality programmers, who have volunteered their efforts.

 > Additionally, Stallman prefers using machines
 > with larger address spaces because they're easier to program.  This
 > is understandable, since it allows him to concentrate his time on
 > things other than trying to reduce the size of things to fit in 640K.

That's an approximation of RMS' feeling on the matter.  I advise
interested people to read the GNU Manifesto.  It was published in the
March 1985 issue of Dr. Dobb's Journal.  It, and answers to questions
about the GNU project, can also be obtained from:

  <gnu@prep.ai.mit.edu> aka <..!ucbvax!prep.ai.mit.edu!gnu>

enjoy -len
-- 
Len Tower, Distributed Systems Group, Boston University,
     111 Cummington Street, Boston, MA  02215, USA +1 (617) 353-2780
Home: 36 Porter Street, Somerville, MA  02143, USA +1 (617) 623-7739
UUCP: {}!harvard!bu-cs!tower		INTERNET:   tower@bu-cs.bu.edu

gwyn@brl-smoke.ARPA (Doug Gwyn ) (08/30/87)

In article <738@hplabsz.HPL.HP.COM> kempf@hplabsz.HPL.HP.COM (Jim Kempf) writes:
>I, too, would like to see FSF's software on the PC/AT and PC/XT, but
>I think it's unreasonable to expect FSF to do it. 

The hard part is the OS kernel, and for that you can use MINIX
(subscribe to comp.os.minix).  User-mode utilities should not be
too hard to port to MINIX.

henry@utzoo.UUCP (Henry Spencer) (08/30/87)

> ... Additionally, Stallman prefers using machines
> with larger address spaces because they're easier to program. This
> is understandable, since it allows him to concentrate his time on
> things other than trying to reduce the size of things to fit in 640K.

A less charitable view of this is that Stallman couldn't write a small
program to save his life.  Unfortunately, this is a common malady nowadays.
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

adamm@encore.UUCP (Adam S. Moskowitz) (09/01/87)

In article <8520@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) says:
>> ... Additionally, Stallman prefers using machines
>> with larger address spaces because they're easier to program. This
>> is understandable, since it allows him to concentrate his time on
>> things other than trying to reduce the size of things to fit in 640K.
> 
> A less charitable view of this is that Stallman couldn't write a small
> program to save his life.  Unfortunately, this is a common malady nowadays.

It's not only less charitable, it's dumb.  Having met Richard and dealt with
him technically (although not a lot), I'd bet he *could* write a small
program.  I know I can.  [insert "war story" about growing up on an 8K
machine here]  The point being this: trying to squeeze complex programs into
small spaces is a waste of effort.  It often results in programs that either
a) have some (often unusable) limits (file name sizes, # of files, &c.), or
b) are hard to read/maintain, because too much thought had to go into trying
to fit the damn thing into a small space, and not enough effort was left to
making the program functional/maintainable.  I'm not saying you can't write
a program that does everything including wash the dishes, has no limits, is
easy to maintain, and fits in 640K (or whatever), but why add the size limit?
Why kill yourself to deal with what many people feel is a bad hardware design?
-- 
Adam S. Moskowitz	...!{decvax,ihnp4,linus,necntc,talcott}!encore!adamm

     "Gonna die with a smile if it kills me!"  --  Jon Gailerfut

richard@islenet.UUCP (09/01/87)

In article <8520@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
> > ... Additionally, Stallman prefers using machines
> > with larger address spaces because they're easier to program. This
> > is understandable, since it allows him to concentrate his time on
> > things other than trying to reduce the size of things to fit in 640K.
> 
> A less charitable view of this is that Stallman couldn't write a small
> program to save his life.  Unfortunately, this is a common malady nowadays.
>

Leave it to someone who's been using small, out-dated equipment for
years now to be so publicly unkind.

How seemingly intelligent people can find the need to belittle
others publicly, without provocation, is a mystery.

But to attack someone who writes software and distributes it free of
charge, to attack them because they don't cater to your particular
obsolete machinery is amazingly selfish and stupid.



-- 
Richard Foulk		...{dual,vortex,ihnp4}!islenet!richard
Honolulu, Hawaii

ast@cs.vu.nl (Andy Tanenbaum) (09/02/87)

In article <738@hplabsz.HPL.HP.COM> kempf@hplabsz.HPL.HP.COM (Jim Kempf) writes:
> Additionally, Stallman prefers using machines
>with larger address spaces because they're easier to program. This
>is understandable, since it allows him to concentrate his time on
>things other than trying to reduce the size of things to fit in 640K.

640K?  Why would he need 640K?  MINIX runs quite well on a 512K AT
with one 1.2M floppy disk.  Who needs 640K?

Andy Tanenbaum (ast@cs.vu.nl)

jim@cs.strath.ac.uk (Jim Reid) (09/02/87)

In article <1883@encore.UUCP> adamm@encore.UUCP (Adam S. Moskowitz) writes:
>.................................... trying to squeeze complex programs into
>small spaces is a waste of effort.  It often results in programs that either
>a) have some (often unusable) limits (file name sizes, # of files, &c.), or
>b) are hard to read/maintain, because too much thought had to go into trying
>to fit the damn thing into a small space, and not enough effort was left to
>making the program functional/maintainable.  I'm not saying you can't write
>a program that does everything including wash the dishes, has no limits, is
>easy to maintain, and fits in 640K (or whatever), but why add the size limit?

I agree with the gist of what you say, essentially that programmers
need not have to worry too much about the underlying hardware. However,
we should not forget that sometimes these "artificial" hardware
constraints can be a benefit. Remember that the UNIX kernel in the days
of V7 (and before) fitted into 64K because that was as big a program
as a PDP-11 could run (notwithstanding sep I/D or fancy overlays or
extended addressing). To quote Ritchie and Thompson's original CACM paper:
"the size constraint has encouraged not only economy, but also a certain
elegance of design". Where would UNIX be today without that minimalism?

Another case in point would be the evolution of the UNIX spell command
and how a 30,000 word dictionary was squeezed into 64 Kbytes (the PDP
limitations again) for an efficient and quick spelling checker.
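
The trick is worth a sketch.  This is not the actual spell code, just
an illustration of the principle: hash each dictionary word into a
fixed-size bitmap (what would now be called a Bloom filter), so the
table stays at 64 Kbytes no matter how many words go in, at the cost
of a small false-positive rate.  The sizes and hashes are invented:

```c
#include <assert.h>
#include <stdint.h>

#define TABLE_BITS (64L * 1024 * 8)     /* a 64 Kbyte bitmap */
static unsigned char table[TABLE_BITS / 8];

/* One of several "independent" hash probes for a word (illustrative). */
static uint32_t probe(const char *s, uint32_t seed)
{
    uint32_t h = seed;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % TABLE_BITS;
}

/* Set three bits per dictionary word. */
void add_word(const char *word)
{
    uint32_t i;
    for (i = 1; i <= 3; i++) {
        uint32_t b = probe(word, i * 2654435761u);
        table[b / 8] |= 1 << (b % 8);
    }
}

/* A clear bit proves absence; all three set means the word is almost
 * certainly known.  Good enough for flagging probable misspellings. */
int maybe_in_dictionary(const char *word)
{
    uint32_t i;
    for (i = 1; i <= 3; i++) {
        uint32_t b = probe(word, i * 2654435761u);
        if (!(table[b / 8] & (1 << (b % 8))))
            return 0;
    }
    return 1;
}
```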

Now we have editors that easily guzzle a megabyte (or more) of memory
and take forever to start up. So much for progress. A program's quality
or usefulness is not necessarily related to its size.

		Jim

lawitzke@eecae.UUCP (09/02/87)

> Xref: eecae comp.arch:1452 comp.unix.wizards:3138 comp.os.misc:125
> 
> Minix is v7 - (things you didn't know about, and don't want even if
> you did), the GNU kernel should be 4.3BSD + (things) - (security
> features).
> 
The GNU kernel should be 4.3BSD + (things) + (security features).
Since GNU will be distributed in source form for next to nothing,
it will be very attractive to small schools or companies that
want to run UNIX but whose gurus want source code.
In anything but an environment where you have just a handful of people
working on the system who know what they are doing, you have to
have the security features.

What security features don't you want? Separate userids and passwords?
file protection modes? disk quotas? checking the name of a system
uucping in? .......


-- 
John H. Lawitzke                 UUCP: ...ihnp4!msudoc!eecae!lawitzke
Division of Engineering Research ARPA: lawitzke@eecae.ee.msu.edu  (35.8.8.151)
Michigan State University        Office: (517) 355-3769
E. Lansing, MI, 48824

nather@ut-sally.UUCP (09/02/87)

In article <3470@islenet.UUCP>, richard@islenet.UUCP (Richard Foulk) writes:
> 
> But to attack someone who writes software and distributes it free of
> charge, to attack them because they don't cater to your particular
> obsolete machinery is amazingly selfish and stupid.
> 

Henry Spencer is anything but selfish and stupid, as you would know if you
had spent any time on the net.  He's entitled to his opinion.  You'll be
entitled to call him names when you've contributed as much as he has.

-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.AS.UTEXAS.EDU

mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) (09/03/87)

[Followups directed to comp.os.misc]

In article <2117@eecae.UUCP> lawitzke@eecae.UUCP (John Lawitzke) writes:
<> Xref: eecae comp.arch:1452 comp.unix.wizards:3138 comp.os.misc:125
<> 
<> Minix is v7 - (things you didn't know about, and don't want even if
<> you did), the GNU kernel should be 4.3BSD + (things) - (security
<> features).
<> 
<The GNU kernel should be 4.3BSD + (things) + (security features)

Last time I checked with RMS, the line was "I certainly don't want any
hairy security features" in applications. There was also some
indication that the [gu]id manipulation code would be different than
stock 4.3. Knowing RMS's stand on such things, I would be greatly
surprised if "different" amounted "+(security features)."

<In anything but an environment where you have just a handful of people
<working on the system who know what they are doing, you have to
<have the security features.

Ask RMS. I suspect he would disagree. I will agree that there are many
environments where you need security. On the other hand, there are
many outside of the small set you describe that don't really need
security.

<What security features don't you want? Separate userids and passwords?
<file protection modes? disk quotas? checking the name of a system
<uucping in? .......

Did I say I didn't want security features? Doesn't look like it. Now I
did say there were some things in the set {v7 syscalls} - {Minix
syscalls} that I didn't want. For instance, Tanenbaum was applauded
at his talk in the south bay (SF bay) when he mentioned that he didn't
provide ptrace().

	<mike
--
How many times do you have to fall			Mike Meyer
While people stand there gawking?			mwm@berkeley.edu
How many times do you have to fall			ucbvax!mwm
Before you end up walking?				mwm@ucbjade.BITNET

molly@killer.UUCP (09/03/87)

In article <3470@islenet.UUCP>, richard@islenet.UUCP (Richard Foulk) writes:
> In article <8520@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
> > > ... Additionally, Stallman prefers using machines
> > > with larger address spaces because they're easier to program. This
> > > is understandable, since it allows him to concentrate his time on
> > > things other than trying to reduce the size of things to fit in 640K.
> > 
> > A less charitable view of this is that Stallman couldn't write a small
> > program to save his life.  Unfortunately, this is a common malady nowadays.
> 
> But to attack someone who writes software and distributes it free of
> charge, to attack them because they don't cater to your particular
> obsolete machinery is amazingly selfish and stupid.

I run on a 2MB 68000 and an 8MB 68020 machine, which to me has cheap memory.
Now, I have compress 4.0 which uses about 400KB of memory.  With 8 users on
the 68000, suddenly loading 400K is a real shock, the 68020 just sort of
sighs and goes on.

Which machine do you think I worry more about keeping happy?  A 68000 that
is `outdated', or the 68020?  The answer is so simple only totally stupid
jerks (like the last bozo ;-) can't see it.  Take care of the little things,
like how much memory you use, and the big things take care of themselves.

Any program I write for the 68000 runs better on the '020 without needing
to recompile.  Just crank up ftp and send the puppy by way of the ether
bunny.

Goddess knows what he'd think of my poor 768K box at home ...

Molly
-- 
       Molly Fredericks       UUCP: { any place real }!ihnp4!killer!molly
    Disclaimer:  Neither me, nor my cat, had anything to do with any of this
  "I love giving my cat a bath, except for all those hairs I get on my tongue"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mitch@stride1.UUCP (Thomas P. Mitchell) (09/04/87)

In article <8520@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>> ... Additionally, Stallman prefers
>> working with larger address spaces because..
>
>A less charitable view of this is that Stallman couldn't write a small
>program to save his life.  Unfortunately, this is a common malady nowadays.

A charitable view of this is that Stallman shouldn't.  The large
number of recent TeX implementations faster and smaller (code
size) than the original take nothing away from the design of D.
Knuth.  Large-to-small transitions come in response to a design that
is useful and cleanly defined.

Here most of us sit with sh, csh, sed, awk, lex, yacc, 'C',
assembler, etc., yet some of us forget why all these tools were
collected into what I think of as Unix.  Recall that many 'C'
programs were first designed in shell, sed, awk, etc., and later
rewritten in 'C', perhaps linked to libraries optimized in
assembler.  Those that have survived were useful and also
cleanly defined.

So .. Go for it RMS. and thanks for GNU Emacs. 
And thanks to others for all the different architectures which
keep us from all being little blue sm*fs.
Thomas P. Mitchell (mitch@stride1.Stride.COM)
Phone:	(702) 322-6868 TWX:	910-395-6073
MicroSage Computer Systems Inc. a Division of Stride Micro.
Opinions expressed are probably mine. 

peter@sugar.UUCP (Peter da Silva) (09/05/87)

> Minix is v7 - (things you didn't know about, and don't want even if
> you did), the GNU kernel should be 4.3BSD + (things) - (security
> features).

setenv SARCASM "on"

You mean like being able to plug a terminal in and run multiuser?

setenv SARCASM "off"
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                 'U`  <-- Public domain wolf.

pf@diab.UUCP (Per Fogelstrom) (09/07/87)

In article <677@stracs.cs.strath.ac.uk> jim@cs.strath.ac.uk writes:
>I agree with the gist of what you say, essentially that programmers
>need not have to worry too much about the underlying hardware. However,
>we should not forget that sometimes these "artificial" hardware
>constraints can be a benefit. Remember that the UNIX kernel in the days
>of V7 (and before) fitted into 64K because that was as big a program
>as a PDP-11 could run (notwithstanding sep I/D or fancy overlays or
>extended addressing). To quote Ritchie and Thompson's original CACM paper:
>"the size constraint has encouraged not only economy, but also a certain
>elegance of design". Where would UNIX be today without that minimalism?

Unix today is not what it was many years ago. Nowadays it has virtual
memory, networking, and much, much more built in.
 
>Now we have editors that easily guzzle a megabyte (or more) of memory
>and take forever to start up. So much for progress. A program's quality
>or usefulness is not necessarily related to its size.

I believe you refer to "emacs"-type editors. The main reason for the slow
startup is not the size of the program, but rather that it reads a huge
number of files containing macros, key bindings, etc. A small program
would not do that any faster, I believe. And using up a megabyte? Not if
you only use the simplest functions. The reason for being big is the
number of more or less useful functions and commands built in.

By the way, one of the more important features of Unix is the ability to
link programs together with pipes. In that way, functions can be put
into "small" programs like sed, cat, sort, etc., and called up together
in a shell script to form some complex functions.
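
What the shell does to wire such a pipeline together can itself be
shown in a few lines of C.  A simplified sketch, with a built-in
child that upper-cases its input standing in for a real filter like
sed or sort (error checking omitted):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Feed `in` through a child "filter" process and collect the result,
 * the way a shell connects two pipeline stages with pipe(2). */
void run_filter(const char *in, char *out, size_t outlen)
{
    int down[2], up[2];          /* parent->child and child->parent */
    pipe(down);
    pipe(up);
    if (fork() == 0) {           /* child: the filter stage */
        char c;
        close(down[1]); close(up[0]);
        while (read(down[0], &c, 1) == 1) {
            c = toupper((unsigned char)c);
            write(up[1], &c, 1);
        }
        _exit(0);
    }
    close(down[0]); close(up[1]);
    write(down[1], in, strlen(in));   /* the filter's stdin... */
    close(down[1]);                   /* ...then EOF, so it finishes */
    size_t got = 0;
    ssize_t n;
    while (got + 1 < outlen &&
           (n = read(up[0], out + got, outlen - got - 1)) > 0)
        got += n;
    out[got] = '\0';
    close(up[0]);
    wait(NULL);
}
```

A real shell would also dup2() the pipe ends onto file descriptors 0
and 1 before exec'ing each program, instead of reading and writing
the pipe directly.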

franka@mmintl.UUCP (Frank Adams) (09/08/87)

In article <1883@encore.UUCP> adamm@encore.UUCP (Adam S. Moskowitz) writes:
>Why kill yourself to deal with what many people feel is a bad hardware design?

Well, I can tell you why *we* do it; but that particular reason isn't
applicable to the Free Software Foundation.

On the other hand, one can rephrase the question as: why kill yourself to
produce code that lots of people can use right now?
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

jdg@elmgate.UUCP (Jeff Gortatowsky) (09/08/87)

In article <8907@ut-sally.UUCP> nather@ut-sally.UUCP (Ed Nather) writes:
>In article <3470@islenet.UUCP>, richard@islenet.UUCP (Richard Foulk) writes:
>> 
>> But to attack someone who writes software and distributes it free of
>> charge, to attack them because they don't cater to your particular
>> obsolete machinery is amazingly selfish and stupid.
>> 
>
>Henry Spencer is anything but selfish and stupid, as you would know if you
>had spent any time on the net.  He's entitled to his opinion.  You'll be
>entitled to call him names when you've contributed as much as he has.
>
>-- 
>Ed Nather
>Astronomy Dept, U of Texas @ Austin
>{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
>nather@astro.AS.UTEXAS.EDU


Ed, pulllleeeezzzzeee!  Henry normally doesn't make silly rash comments
like the one he made.  
It's quite understandable that Rich might feel RMS is going in the
right direction and take exception ((trap 0xffffff I believe, "Nonsense
Instruction Trap")) to Henry's comments.  Coming from a man who is
normally the voice of reason, I too found Mr Spencer's comments off base
and childish.  I choose to encourage people like RMS.  Hopefully, someday
we will ALL own the type of machines RMS is writing for.

Which brings me to my next comp.arch question.
*WHAT ARE THOSE MACHINES GOING TO BE??*.
Barring RISC vs. CISC debates, what is *THE* next great advance in 
computer technology?  Better hardware?  Not 64 bits(?).  I mean more 
in design.  What do you guys at MIPS, LISP Machines, Motorola, Intel, 
NEC, Sun (yes they are now in the CPU game), etc.... see as the most 
promising area of exploration?


Hopefully this is a more appropriate line of discussion..... 8^)

Jeff
-- 
Jeff Gortatowsky       {seismo,allegra}!rochester!kodak!elmgate!jdg
Eastman Kodak Company  
These comments are mine alone and not Eastman Kodak's. How's that for a
simple and complete disclaimer? 

gew@dnlunx.UUCP (Weijers G.A.H.) (09/08/87)

In article <2117@eecae.UUCP>, lawitzke@eecae.UUCP (John Lawitzke) writes:
> > 
> > Minix is v7 - (things you didn't know about, and don't want even if
> > you did), the GNU kernel should be 4.3BSD + (things) - (security
> > features).
> > 
> The GNU kernel should be 4.3BSD + (things) + (security features).
> Since GNU will be distributed in source form for next to nothing,
> it will be very attractive to small schools or companies that
> want to run UNIX but whose gurus want source code.

The difference between Minix and the GNU kernel is the delivery date.
Minix is available now for IBM PC clones. Adding features like networking
and MMU/VM support should not be overly difficult. I wonder whether
having a full 4.3BSD on a personal computer is not overkill.
Personally I'd rather have an easily modifiable kernel.
For most purposes V7 is entirely adequate, although it certainly has its
shortcomings.

G. Weijers
PTT Dr. Neher Laboratories
Leidschendam, the Netherlands

disclaimer: the expressed opinions are entirely mine, and not necessarily
the opinions of my employer.



karl@sugar.UUCP (Karl Lehenbauer) (09/09/87)

In article <3470@islenet.UUCP>, richard@islenet.UUCP (Richard Foulk) writes:
> In article <8520@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
> > ...A less charitable view of this is that Stallman couldn't write a small
> > program to save his life.  Unfortunately, this is a common malady nowadays.

> Leave it to someone who's been using small, out-dated equipment for
> years now to be so publicly unkind.

> How seemingly intelligent people can find the need to belittle
> others publicly, without provocation, is a mystery.

> But to attack someone who writes software and distributes it free of
> charge, to attack them because they don't cater to your particular
> obsolete machinery is amazingly selfish and stupid.

You chastise Henry Spencer for belittling others publicly but then you use 
this kind of inflammatory rhetoric to belittle him, publicly, in exactly the 
same manner you accuse him of doing, calling him "seemingly intelligent" and 
"amazingly selfish and stupid."  I thought his posting was pretty funny, and 
it almost certainly had a far less scabrous and more harmless intent than your 
rather vitriolic followup.

If everyone would work at using less confrontational language in their 
postings, I'd at least get a lot more of the things out of usenet that I'm
looking for, spending my time on usenet more efficiently as well.  Presumably
others would also find this to be true.
-- 
...!soma!uhnix1!sugar!karl    "Life, don't talk to me about life." - Marvin TPA

msf@amelia (Michael S. Fischbein) (09/10/87)

In article <2346@mmintl.UUCP> franka@mmintl.UUCP (Frank Adams) writes:
>In article <1883@encore.UUCP> adamm@encore.UUCP (Adam S. Moskowitz) writes:
>>Why kill yourself to deal with what many people feel is a bad hardware design?
>
>Well, I can tell you why *we* do it; but that particular reason isn't
>applicable to the Free Software Foundation.

[The "we" above is presumably Ashton-Tate ]

How about rephrasing Mr. Moskowitz's comment to be:
Why kill yourself to deal with what YOU feel is a bad hardware design?

If someone else wants to work on it, fine.  If someone wants to port the code
that I've written and promulgated for general use to their favorite odd
architecture, more power to them.  I feel no constraint to write code with
other people's machine peculiarities in mind, and do not attempt to constrain
them to write code for my machine.  If I did, I would be asking the Intel
proponents how they could be so clumsy as to write code that wouldn't run on
my Iris.  I don't;  if someone is gracious enough to put useful code on the
net and it won't run as is, I'll port it: that has to be faster and easier
than starting from scratch.

		mike

Michael Fischbein                 msf@prandtl.nas.nasa.gov
                                  ...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.

debray@arizona.edu (Saumya Debray) (09/11/87)

>>>> Stallman prefers using machines with larger address spaces ...

>>>A less charitable view of this is that Stallman couldn't write a small
>>>program to save his life.  

>>But to attack someone who writes software and distributes it free of
>>charge, to attack them because they don't cater to your particular
>>obsolete machinery is amazingly selfish and stupid.

>I run on a 2MB 68000 and an 8MB 68020 machine, which to me has cheap memory.
>Which machine do you think I worry more about keeping happy?  A 68000 that
>is `outdated', or the 68020?  The answer is so simple only totally stupid
>jerks (like the last bozo ;-) can't see it.

Someone's giving you software free of charge.  As I see it, he's doing you
a favour.  And what's the response?  He gets called names, his professional
competence is questioned.  Is this how you treat people who do you favours?
Sheesh!

For all who're so upset about not getting Stallman to write their software
for them, hey, port the software yourself if you have the balls, else shut
up.  If you're not paying this guy, he owes you nothing.
-- 
Saumya Debray		CS Department, University of Arizona, Tucson

     internet:   debray@arizona.edu
     uucp:       {allegra, cmcl2, ihnp4} !arizona!debray

peter@sugar.UUCP (09/11/87)

> I assume you left off the ":)" by accident. If GNU is supposed to be
> BSD 4.3 compatible it is a significantly more ambitious effort than
> MINIX. MINIX is a decent, small system for teaching. GNU is supposed
> to be suitable for research or commercial development.

Are you implying that Version 7 wasn't suitable for research or commercial
development? Remember... UNIX didn't start out as BSD 4.3 either. Thank
the gods (Thompson & Ritchie). BSD would never have run on the machines
available in the early and mid seventies, just as GNU won't run on the
personal computers available today.

> I have been looking for an inexpensive, Unix-based system for my
> personal use. MINIX isn't powerful enough to be useful to me, even 
> for hobby hacking. Hopefully GNU will be.

MINIX probably needs a better message passing mechanism, to avoid some of
the delays, and a bit of disk I/O optimisation. Otherwise it's quite a
usable system if you don't have anything better. Personally I prefer
AmigaDOS, but it's not designed to go anywhere... and MINIX is.

I will venture to predict that by the time GNU is out MINIX will be big
enough to satisfy you.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                 'U`  <-- Public domain wolf.

gertjan@nlcvx.convex.nl (Gertjan Vinkesteyn) (09/12/87)

please keep this discussion
out of this newsgroup.

thanks
-- 
UUCP and other network	  )\/(	in America: ..!seismo!mcvax!nlcvx!gertjan
  connections via mcvax	  )/\(	in Europe: ..!mcvax!nlcvx!gertjan
This note does not necessarily represent the position of Convex Computer BV
Therefore no liability or responsibility for whatever will be accepted.

ron@topaz.rutgers.edu.UUCP (09/12/87)

The MINIX kernel supports most of the V7 system calls.  That is
not to say it is equivalent to what we grew up with as Version 7.
MINIX does not swap (at least not the one I have).  Devices work
differently.  In addition, although it is probably good for its
intended purpose (teaching), the way it goes about doing certain
things makes it a bit slower than it needs to be.  Also, much of
the user-mode code consists of subsets of the V7 programs, or is
not there at all.

Anyhow, RMS is right.  He shouldn't be re-implementing either V7 or
BSD.  If I just wanted to USE a UNIX-like operating system, I'd use
one of the ones currently available that already runs on my PC.
It's senseless duplication of effort. 

Since RMS is trying to bring about social change in computer use,
I doubt that reimplementing the past is going to help.  Systems
he has developed previously have become successful because they
were novel and worthwhile, not because they were given away free.
I'd rather see him blazing his way for the next generation than
forced to reimplement the same old stuff so a few lazy hackers
could have free source code to diddle with.

-Ron

dave@sdeggo.UUCP (09/13/87)

In article <692@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
(there was more up here about GNU being better than Minix for software
development)
> Are you implying that Version 7 wasn't suitable for research or commercial
> development? Remember... UNIX didn't start out as BSD 4.3 either. Thank
> the gods (Thompson & Ritchie). BSD would never have run on the machines
> available in the early and mid seventies, just as GNU won't run on the
> personal computers available today.

It's not a case of wasn't; it was.  It isn't today (at least not a PDP-11
based version), and neither is Minix in its present, IBM PC form.  Minix
is an impressive effort, and I give Dr. Tanenbaum his due, but I would
hate to have to develop a large software package (has anyone ported "hack"
yet?) under it.  It's kind of like all the people who have taken Pascal
(designed as a _teaching_ language, to be hand compiled by an instructor!)
and wanted to try and develop real software in it.  It's possible, but it 
ain't pleasant.

BSD 4.3 would run just fine on an 80386 and it does run just fine on 68000's
and 68020, so there is no reason that GNU wouldn't.  "Personal computers 
available today" are available based on those chips, and that is where the 
market is heading.  

With some work (as Peter pointed out in his article, but I already threw 
away that part :-( ), Minix could be changed to be BSD compatible.  The first
task, though, is to port it to a 68000 (with a good memory manager) or an
80386 and get around the 64K task size limit.  The rest could be added in
slowly.

This might beat GNU out the door, but I'm not sure of the status of the GNU 
project.  Why are so many people so anxious to beat on it?  Someone's doing 
you a favor and all you can do is bitch about how "it's not here, it's too 
big, it doesn't do what I want it to do, my machine can't run it..."?  Cripes, 
don't look a gift horse in the mouth.  Especially one that nobody's really 
seen yet.
-- 
David L. Smith
{sdcsvax!sdamos,ihnp4!jack!man, hp-sdd!crash}!sdeggo!dave
sdeggo!dave@sdamos.ucsd.edu 
"How can you tell when our network president is lying?  His lips move."

ast@cs.vu.nl (Andy Tanenbaum) (09/14/87)

In article <87@sdeggo.UUCP> dave@sdeggo.UUCP (David L. Smith) writes:
>With some work, ... Minix could be changed to be BSD compatible.  The first
>task, though is to port it to a 68000 (with a good memory manager) or an
>80386 and get around the 64K task size limit.  The rest could be added in
>slowly.

As has been pointed out already, MINIX has already been ported to the 68000,
albeit without an MMU.  That (Atari) version is now in beta testing.  The
Atari version does not have a 64K limit.

Actually, I think that if anyone is going to do that much work, a much better
idea is to modify MINIX to make it conform to the POSIX standard, which is the
UNIX of the future.

Here is a suggestion/request in that direction.  Will someone who is familiar
with POSIX draw up a list of POSIX system calls, noting for each one whether
 1. it is identical to the corresponding MINIX call
 2. it is different from the corresponding MINIX call
 3. it is not present in MINIX at all

Similarly, for each program and library routine in POSIX, a similar list would
be useful.  That way people who want to get together and make MINIX more "real
world", could at least have a specific list of what needs to be done, and the
target would be something that will eventually be an International Standard,
rather than 4.3, which is not likely to survive once an International Standard
has been established for UNIX systems.

Andy Tanenbaum (ast@cs.vu.nl)

fdr@apollo.uucp (Franklin Reynolds) (09/14/87)

In article <692@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>
>> MINIX is a decent, small system for teaching. GNU is supposed
>> to be suitable for research or commercial development.
>
>Are you implying that Version 7 wasn't suitable for research or commercial
>development? 

Are you implying that MINIX is suitable for research or commercial
development? 

All I implied was that MINIX was not suitable for research or commercial
development. MINIX is similar, but not equal, to Version 7. Some of the
features that are lacking are very important. On the other hand, 
Version 7 is pretty old and lacks some things that are important to *me*.
>
>MINIX probably needs a better message passing mechanism, to avoid some of
>the delays, and a bit of disk I/O optimisation. Otherwise it's quite a
>usable system if you don't have anything better. Personally I prefer
>AmigaDOS, but it's not designed to go anywhere... and MINIX is.
>
>I will venture to predict that by the time GNU is out MINIX will be big
>enough to satisfy you.

It is not clear to me that MINIX needs much in the way of extensions.  I
think it is well suited to the task it was designed for, i.e., teaching. 
However, I want a system with virtual memory (large address space), 
demand paging, distributed file system, communications and database 
support. I suspect, though I can't prove, that these facilities are 
important to an awful lot of researchers and commercial developers. If 
MINIX is extended along these lines then I will check it out again, 
otherwise, I will wait for GNU.

Franklin Reynolds
Apollo Computer
mit-eddie!apollo!fdr

preston@felix.UUCP (Preston Bannister) (09/14/87)

In article <87@sdeggo.UUCP> dave@sdeggo.UUCP (David L. Smith) writes:
>It's kind of like all the people who have taken Pascal
...
>and wanted to try and develop real software in it.  It's possible, but it 
>ain't pleasant.

Please, no language R-wars.  I've used C, Modula-2, Pascal and Lisp quite
a lot at different times.  None of them is universally preferable to
the other.  And yes, I _have_ developed "real" software in Pascal. 

>BSD 4.3 would run just fine on an 80386 and it does run just fine on 68000's
>and 68020, so there is no reason that GNU wouldn't.  "Personal computers 
>available today" are available based on those chips, and that is where the 
>market is heading.  

Close but not quite.  The 68000, 68010 and 68020 cannot support demand
paged virtual memory without an MMU.  The (original) 68000 has a basic
flaw that makes implementing virtual memory ugly even _with_ an MMU.

The only easily available (i.e. low cost) machine with suitable
support for demand paged virtual memory is the 80386-based PC family.
The next closest candidate is the Mac II, which will have a reasonable
MMU sometime soon (when Motorola gets its new MMU chip out the door).

That leaves out the vast majority of personal computers.

>With some work, 
...
>Minix could be changed to be BSD compatible.  

I would think that the best long-term goal would be to make Minix as
close to POSIX compatible as possible.

>The first
>task, though is to port it to a 68000 (with a good memory manager) or an
>80386 and get around the 64K task size limit.  The rest could be added in
>slowly.

To add something constructive to this discussion... :-)

I think what we should _really_ be thinking about is how to make Minix
match the available machines more closely.  One thing about Unix that
has always bugged me is the fork() primitive.  The fork primitive
_assumes_ that you can make an _exact_ copy of a running process.
That's rather difficult (expensive, ugly...) on a machine that doesn't
support the right flavor of virtual memory.  It also ties the process
abstraction and the virtual address space abstraction together (i.e.
you can't have more than one process in the same address space).

Most uses of fork() are immediately followed by an exec() call.  Why
else do you think the BSD people came up with vfork()?  A direct "start
program in a separate process" call would accomplish the same effect
and would be easily implementable on machines without hardware support
for (the right flavor of) virtual memory.  

I'd rather see primitives like "start process in same address space",
"start process in new address space", and "start program in new
address space".  (Perhaps this could be orthogonalized by combining
primitives for program-loading, process creation, and address space
creation).  I suspect with a little thought, the primitives could be
implemented _efficiently_ on a much wider range of machines.

--
Preston L. Bannister
USENET	   :	ucbvax!trwrb!felix!preston
BIX	   :	plb
CompuServe :	71350,3505
GEnie      :	p.bannister

henry@utzoo.UUCP (Henry Spencer) (09/15/87)

> Leave it to someone who's been using small, out-dated equipment for
> years now to be so publicly unkind.

Actually, I'm about to start using much larger and more modern equipment.
This does not diminish my distaste for software that seems to be written
on the assumption that 4MB memory boards cost a nickel apiece.

To pick a non-GNU example, graphing the size of the ls(1) command versus
time is an interesting exercise, not to be recommended if you are susceptible
to nausea and vomiting.  To pick an example that is ready at hand, the Sun
3.2 ls(1) is four times the size of the V7 ls(1).  It's not four times as
good; the improvement in functionality might charitably be put at 25%.

This sort of gratuitous bloat is endemic in post-PDP11 software.  While I
do not claim that 16-bit address spaces are anything but a pain -- I have
much more experience with this than most of my readers! -- they do tend
to teach respect for resource consumption.  I will not be sorry to leave
the PDP11 behind, but it won't be trivial to make our glorious new 32-bit
machine support as many users -- doing the same things! -- as our lousy
little 11/44, despite much more memory and a much faster CPU.
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

henry@utzoo.UUCP (Henry Spencer) (09/15/87)

> > A less charitable view of this is that Stallman couldn't write a small
> > program to save his life.  Unfortunately, this is a common malady nowadays.
> 
> It's not only less charitable, it's dumb.  Having met Richard and dealt with
> him technically (although not a lot), I'd bet he *could* write a small
> program...

Actually, having said as much in private mail, I might as well say it in
public:  Stallman *is* quite competent -- I may question his judgement but
not his ability -- and certainly could write a small program to save his life.
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

jbs@eddie.MIT.EDU (Jeff Siegal) (09/16/87)

In article <8579@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>[...]
>To pick a non-GNU example, graphing the size of the ls(1) command versus
>time is an interesting exercise, not to be recommended if you are susceptible
>to nausea and vomiting.  To pick an example that is ready at hand, the Sun
>3.2 ls(1) is four times the size of the V7 ls(1).  It's not four times as
>good; the improvement in functionality might charitably be put at 25%.

However, a more meaningful exercise would be to graph the cost of the
memory used by ls(1) versus time.  This is not recommended, if you are
likely to be nauseated by the failure of Unix developers to take best
advantage of available "resources".

Seriously, rapidly changing conditions are a fact in the computer
industry.  To attempt to use such volatile constraints as a metric
over time doesn't make too much sense.

Jeff Siegal

rlk@think.COM (Robert Krawitz) (09/16/87)

In article <7320@felix.UUCP> preston@felix.UUCP (Preston Bannister) writes:
]In article <87@sdeggo.UUCP> dave@sdeggo.UUCP (David L. Smith) writes:
]>BSD 4.3 would run just fine on an 80386 and it does run just fine on 68000's
]>and 68020, so there is no reason that GNU wouldn't.  "Personal computers 
]>available today" are available based on those chips, and that is where the 
]>market is heading.  
[...386 boxen and Mac II w/68851]
]That leaves out the vast majority of personal computers.

These days.  How about t[gettimeofday()+2yrs]?  That, after all, is
what the FSF is heading for.

]>With some work, 
]...
]>Minix could be changed to be BSD compatible.  
]
]I would think that the best long-term goal would be to make Minix as
]close to POSIX compatible as possible.

Minix was designed as a teaching system, not as a production system.
It would take lots of hacking to hammer it into shape.  For that
matter, so would GNU, I suppose, but it's starting from a premise that
strikes me as more amenable to a production machine.

]I think what we should _really_ be thinking about is how to make Minix
]match the availible machines more closely.  One thing about Unix that
]has always bugged me is the fork() primitive.  The fork primitive
]_assumes_ that you can make an _exact_ copy of a running process.
]That's rather difficult (expensive, ugly...) on a machine that doesn't
]support the right flavor of virtual memory.  It also ties the process
]abstraction and the virtual address space abstraction together (i.e.
]you can't have more than one process in the same address space).

Hmm...seems to me that you could implement fork() on an 8086, if you
used a small memory model and programs cooperated (didn't play with
the segment registers).  The PDP11 didn't have virtual memory either,
if my memory serves me.  To run multiple processes concurrently you
need memory management, but not necessarily virtual memory.  To fork,
you need some way of creating distinct address spaces.  You certainly
don't need virtual memory.

Since Minix runs on an 8086 just fine, and that's the weakest
processor that anyone has any interest in running multiprocess things
on, I don't see why this is a problem.

]Most uses of fork() are immediately followed by an exec() call.  Why
 ^^^^
]else do you think the BSD people came up with vfork()?  

For these cases.

]A direct "start program in a separate process" call would accomplish
]the same effect and would be easily implementable on machines without
]hardware support for (the right flavor of) virtual memory.

That already exists in BSD and probably in SysV, but as a library call
implemented in terms of fork() (or vfork()) and exec().  In fact,
Kernighan & Pike give an example of a rewritten system() useful for
special purposes (it's been a while since I read it, so forgive me for
no page or context reference).
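A bare-bones system()-style call has the usual fork/exec/wait shape.  The
sketch below is the general idea only, not the Kernighan & Pike code, and
run_command() is a made-up name:

```c
#include <unistd.h>
#include <sys/wait.h>

/* Minimal system()-style call: fork, exec a shell on the command,
 * wait for it.  Returns the wait() status, or -1 on failure. */
int run_command(const char *cmd)
{
    pid_t pid = fork();
    if (pid == -1)
        return -1;
    if (pid == 0) {
        execl("/bin/sh", "sh", "-c", cmd, (char *)0);
        _exit(127);                      /* exec failed */
    }
    int status;
    while (waitpid(pid, &status, 0) == -1)
        ;                                /* retry if interrupted */
    return status;
}
```

The special-purpose versions K&P discuss differ mainly in what they do between
the fork() and the exec() -- which is exactly the window at issue in this
thread.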

]I'd rather see primitives like "start process in same address space",
]"start process in new address space", and "start program in new
]address space".  (Perhaps this could be orthogonalized by combining
]primitives for program-loading, process creation, and address space
]creation).  I suspect with a little thought, the primitives could be
]implemented _efficiently_ on a much wider range of machines.

Vfork in BSD is implemented reasonably efficiently (compared to fork,
at least), and it's hard to see why it couldn't be implemented
correctly on a non-VM machine.  Remember that when you do a vfork, the
child process shares the address space of the parent (indeed, the
parent goes to sleep until the child exits or forks).  This may be
what you're looking for.

But remember, fork() is useful by itself in many circumstances, such
as daemons that handle more than one input or output channel
(sendmail, inetd, etc.).

Personally, I'd rather see machines that have facilities powerful
enough to run big, powerful software like GNU and emacs.  I don't care
for software that's been held back because of {IBM,Microsoft,Intel}
brain damage.  Just because a large blue company and clones sell more
computers than anyone else doesn't mean that it's the way to go.  I'm
more interested in 68020/68030, 80386 or 80486 whenever that happens
because the life cycle will probably be a lot longer.

Robert^Z

kent@xanth.UUCP (09/17/87)

In article <6886@eddie.MIT.EDU> jbs@eddie.MIT.EDU (Jeff Siegal) writes:
>In article <8579@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>>[...]
>>To pick a non-GNU example, graphing the size of the ls(1) command versus
>>time is an interesting exercise, not to be recommended if you are susceptible
>>to nausea and vomiting.  To pick an example that is ready at hand, the Sun
>>3.2 ls(1) is four times the size of the V7 ls(1).  It's not four times as
>>good; the improvement in functionality might charitably be put at 25%.
>
>However, a more meaningful exercise would be to graph the cost of the
>memory used by ls(1) versus time.  This is not recommended, if you are
>likely to be nauseated by the failure of Unix developers to take best
>advantage of available "resources".
[...]
>Jeff Siegal


Not that I love jumping onto Henry's side of the fence; I still think his
criticism of RMS was out of line, but...

Here at school, and at every other facility I've used over 2.5 decades,
the system is always constrained by storage resources.  A program four
times as big costs four times as much to store, and while it may be a
nickel or a dime for ls(1), when creeping bloatitis overtakes all the
software, you get nickel-and-dimed to death.

Second, another correspondent noted that in ten million process activations
on his system, 98% took less than two seconds of cpu time.  This means that
loading them was a significant fraction of all the work done in executing
them.  Because increases in system bandwidth (Gee, an architectural issue
in comp.arch!) drive up the cost of a whole system and almost all its
components dramatically, this is the area where improvements are slowest.
Tying up
precious bandwidth to load unused portions of over-featured programs is a
big loser, and I think this tips the scales to Henry's side of the argument.
We are rapidly headed toward being I/O bound simply due to program load
costs.

To put it another way, graph the time to load ls(1) versus date of the
version for the versions mentioned and the systems on which they run,
and weep.

For an example close to home, my Amiga is doing good if it can drag
programs off a hard disk at 30K bytes/second over an SCSI interface.
Even though, given the money, I can expand the system to 12.5
megabytes of memory, and put a 760 megabyte hard disk on it, waiting
15 seconds for a 0.5 megabyte editor to load from hard disk is a real
drag.

Kent, the man from xanth.

"His expression lit up.  'Hey, you wouldn't be a dope smuggler, would you?'

Rail looked confused.  'Why would anyone wish to smuggle stupidity when
there is so much of it readily available?'"

		-- Alan Dean Foster, GLORY LANE

jbs@mit-eddie.UUCP (09/18/87)

In article <2473@xanth.UUCP) kent@xanth.UUCP (Kent Paul Dolan) writes:
)In article <6886@eddie.MIT.EDU) jbs@eddie.MIT.EDU (Jeff Siegal) writes:
))In article <8579@utzoo.UUCP) henry@utzoo.UUCP (Henry Spencer) writes:
)))[...]graphing the size of the ls(1) command versus
)))time is an interesting exercise[...]
))
))However, a more meaningful exercise would be to graph the cost of the
))memory used by ls(1) versus time.  [...]
)
)[...]
)We are rapidly headed toward being I/O bound simply due to program load
)costs.
)

On the system I am now on (and the Sun Henry was referring to)

% file /bin/ls
/bin/ls:	demand paged pure executable
                ^^^^^^^^^^^^

)To put it another way, graph the time to load ls(1) versus date of the
)version for the versions mentioned and the systems on which they run,
)and weep.
)
)For an example close to home, my Amiga is doing good if it can drag
)programs off a hard disk at 30K bytes/second over an SCSI interface.
)[...]

Again, if you take into account the changing conditions that exist in
computer technology, things look much better.  30KB/sec is horribly
slow for a Unix system (most Suns use Eagles, with rates of 1.8MB/sec
or higher).  I suspect that over the past 15 years, typical transfer
rates for disks on Unix systems have improved by at least a factor of
four, although I do not have the hard data to present here.

Jeff Siegal

peter@sugar.UUCP (Peter da Silva) (09/19/87)

In article <37461f69.ccb2@apollo.uucp>, fdr@apollo.UUCP writes:
> In article <692@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
> >> MINIX is a decent, small system for teaching. GNU is supposed
> >> to be suitable for research or commercial development.
> >Are you implying that Version 7 wasn't suitable for research or commercial
> >development? 
> Are you impying that MINIX is suitable for research or commercial
> development? 

No, I was inferring that you were implying that Version 7 wasn't.

> All I implied was that MINIX was not suitable for research or commercial
> development.

Actually, that's what you stated. You implied by the analogies between V7 and
MINIX, and GNU and BSD, that V7 wasn't. Right now, neither MINIX nor GNU
is suitable for commercial development... MINIX because it's incomplete,
and GNU because it's nonexistent.

> MINIX is similar, but not equal, to Version 7. Some of the
> features that are lacking are very important. On the other hand, 
> Version 7 is pretty old and lacks some things that are important to *me*.

MINIX can be made equal to V7. And then it can be made better. In the
meantime you and I and everyone else can work on doing this. GNU might
be the greatest system since OS/360, but first it has to come out.

> It is not clear to me that MINIX needs much in the way of extensions.  I
> think it is well suited to the task it was designed for, i.e., teaching. 

Yes, MINIX is now pretty much in the state UNIX was in the 5th edition
days. UNIX was never intended to be the most popular minicomputer operating
system ever, but it did a good job. I think that with today's software
tools MINIX can follow the 12 year path of UNIX in considerably less than
12 years.

> I suspect, though I can't prove, that these facilities are 
> important to an awful lot of researchers and commercial developers.

Not important enough to keep them from using Messy-DOS, it seems.

> If MINIX is extended along these lines then I will check it out again, 
> otherwise, I will wait for GNU.

Got a good book to read? I'd recommend "Software Tools" by Kernighan and
Plauger. If GNU ever comes out, then I'll check it out for the first time.
If I worked at Apollo too then maybe I'd be content to just sit and wait.
-- 
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
--                 'U`  Insert cute saying here.

lmcvoy@eta.ETA.COM (Larry McVoy) (09/19/87)

In article <2473@xanth.UUCP> kent@xanth.UUCP (Kent Paul Dolan) writes:
>[ argument that says program should be small because of load time ]

Um, it seems to me that this issue is nicely handled by demand paging.
Provided that your pages don't get big (1-10k is probably cool), you
don't really pay for anything that you don't use.  You lean on locality
a lot, but if you're really worried about it, profiling and reordering 
code will kill that too.

Get a VM machine & kwit yer bitchin :-)
-- 

Larry McVoy	uucp: ...!{uiucuxc, rosevax, meccts, ihnp4!laidbak}!eta!lmcvoy
		arpa: eta!lmcvoy@uxc.cso.uiuc.edu

chips@usfvax2.UUCP (Chip Salzenberg) (09/19/87)

In article <7320@felix.UUCP>, preston@felix.UUCP writes:
>
> Most uses of fork() are immediately followed by an exec() call.  Why
> else do you think the BSD people came up with vfork()?  A direct "start
> program in a separate process" call would accomplish the same effect
> and would be easily implementable on machines without hardware support
> for (the right flavor of) virtual memory.  
 
There is one thing about UNIX fork()-exec() that you've overlooked --
after the fork(), the child process can set up the environment of the
soon-to-be-exec'ed process by modifying its own environment.
(Can you say `pipes, I/O redirection and current directory'? I knew you could.)

> I'd rather see primitives like "start process in same address space",
> "start process in new address space", and "start program in new
> address space".
> --
> Preston L. Bannister

Under OS-9, there is no UNIX-style fork(); there is a combined fork()-exec().
Granted that it's efficient on 6809 and 680x0 even without an MMU, it's still
a royal pain.  The parent process must dup() its file descriptors, open the
child's, fork(), close the child's, and restore its own.  And those dup'ed
file descriptors are still open on the child!  Blech.

I'd like `new process, same address space', but for spawning another program,
UNIX semantics are elegant and useful.
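The dance described above looks roughly like this in POSIX-flavoured C.
OS-9's actual system calls differ and spawn_redirected() is a made-up name;
fork()+exec() merely stands in here for a combined fork-exec call:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/wait.h>

/* To give the child a different stdout under a combined fork-exec,
 * the PARENT must rearrange its own descriptors, spawn, then put
 * everything back.  Returns the child's wait() status, -1 on error. */
int spawn_redirected(const char *outfile, char *const argv[])
{
    int saved = dup(STDOUT_FILENO);          /* remember our stdout  */
    int out = open(outfile, O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (saved == -1 || out == -1)
        return -1;
    dup2(out, STDOUT_FILENO);                /* child inherits this  */
    close(out);
    pid_t pid = fork();                      /* "combined fork-exec" */
    if (pid == 0) {
        execvp(argv[0], argv);
        _exit(127);                          /* exec failed          */
    }
    dup2(saved, STDOUT_FILENO);              /* restore our stdout   */
    close(saved);
    if (pid == -1)
        return -1;
    int status;
    waitpid(pid, &status, 0);
    return status;
}
```

Note that the complaint still applies: `saved` is inherited by the child
across the spawn, just like the dup'ed descriptors under OS-9.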
-- 
Chip Salzenberg            UUCP: "uunet!ateng!chip"  or  "chips@usfvax2.UUCP"
A.T. Engineering, Tampa    Fidonet: 137/42    CIS: 73717,366
"Use the Source, Luke!"    My opinions do not necessarily agree with anything.

aglew@ccvaxa.UUCP (09/20/87)

> [Kent, the man from Xanth, writes, paraphrased]:
..> If 98% of the programs run are short-lived,
..> then this is an argument for short simple programs with few features:
..> why waste time loading the unused parts of programs with many options?

On the other hand, I regularly use a program with many bells and whistles
(GNU EMACS), but load it up only once and leave it running all day,
sometimes for days or weeks. The cost of the exec is paid only once,
while the cost of loading libraries I don't need is paid only when I
use them.

If I was using a less fully featured editor, I would be spending much
more time going in and out, exec'ing and exit'ing.

However, I have noticed that GNU makes it unnecessary for me to use
the fancy features of short, transitory, utilities.
	For example, I hardly ever use more, because I can scroll my shell
buffer much more conveniently in EMACS.
	I am going back to using the Bourne shell as my command line
interface.  True, I'm doing this to test it by use, while other developers
don't want to give up the csh; but I'm willing, because EMACS gives me the
things people like about the csh - filename completion, history - when I
use the Bourne shell in a shell buffer.
	Line oriented interfaces are much more tolerable under EMACS.
Adb under EMACS is almost pleasant!

I think that I need a fully featured environment to live in;
but the fully featured environment makes simple tools much more tolerable.

Andy "Krazy" Glew. Gould CSD-Urbana.    USEnet:  ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801    ARPAnet: aglew@gswd-vms.arpa

I always felt that disclaimers were silly and affected, but there are people
who let themselves be affected by silly things, so: my opinions are my own,
and not the opinions of my employer, or any other organisation with which I am
affiliated. I indicate my employer only so that other people may account for
any possible bias I may have towards my employer's products or systems.

mouse@mcgill-vision.UUCP (09/20/87)

In article <2117@eecae.UUCP>, lawitzke@eecae.UUCP (John Lawitzke) writes:
>> Minix is v7 - (things you didn't know about, and don't want even if
>> you did), the GNU kernel should be 4.3BSD + (things) - (security
>> features).
> The GNU kernel should be 4.3BSD + (things) + (security features)

> What security features don't you want?

In general, anything which serves no purpose but security.

> Separate userids and passwords?

Lisp Machines have userids but everyone is a super-user, to use the
UNIX terminology.  They have no passwords.  They get by fine.

> File protection modes?

I could live without file protections.  (I already do.  My login has
uid 0.)

> Disk quotas?

I could DEFINITELY live without disk quotas.  We have them turned off
at the moment!

> checking the name of a system uucping in?

Close call (pun deliberate).  But if you are going to go passwordless
there is no reason to bother checking this.

I am clearly an extreme case.  Let's provide a larger-scale example.
We run a 4.3 derivative here.  Two security "features" I wouldn't mourn
come to mind immediately.  One was simple to disable and benign once
disabled; the other is inexcusable because NO MEANS WAS PROVIDED TO
DISABLE IT.  The first one is the "secure" option in /etc/ttys (all our
lines are marked secure); the other is the restriction that only users
in group 0 are allowed to su to root even with the root password.  We
had to recompile su to get rid of this one.

I don't mind security "features" if they don't get underfoot (eg,
distinct userids).  But when they do, I'd better be able to disable
them!  For example, Sun's NFS normally maps uid 0 into uid -2 for
remote requests, as a security "feature".  However, I can live with
this because Sun documents how to fix it and once fixed it is fixed for
good.  (Well, until the next release....)

					der Mouse

				(mouse@mcgill-vision.uucp)

greg@ncr-sd.SanDiego.NCR.COM (Greg Noel) (09/21/87)

In article <8490@think.UUCP> rlk@THINK.COM writes:
>....  The PDP11 didn't have virtual memory either,
>if my memory serves me.  ....

At the risk of re-opening an old debate, the PDP-11 \does/ have virtual
memory.  It's just that, for various technical reasons, the original Unix
implementation for it chose to use swapping instead of paging as its virtual
memory technique.

Yes, it's a nit, but the PDP-11 is a fine machine, and deserves to be
remembered correctly.
-- 
-- Greg Noel, NCR Rancho Bernardo     Greg.Noel@SanDiego.NCR.COM

preston@felix.UUCP (09/21/87)

In article <838@usfvax2.UUCP> chips@usfvax2.UUCP (Chip Salzenberg) writes:

>There is one thing about UNIX fork()-exec() that you've overlooked --
>after the fork(), the child process can set up the environment of the
>soon-to-be-exec'ed process by modifying its own environment.
>(Can you say `pipes, I/O redirection and current directory'? I knew you could.)

The point I was trying to make was that the fork() is typically
followed by an exec() call and that the full semantics of fork() are
not needed in those cases.  What I should have said :-) was that the
code executed between the fork() and exec() call typically does not
need the full semantics of fork().

An extreme example would be, say, a program using a large amount of
memory that wants to run some other program and read its output through
a pipe.

The sequence would be something like:

	- open pipe
	- fork
	- assign one end of pipe for child's standard output
	- exec
	- (parent process now can read output of child from pipe)

The fork call is going to be awfully expensive, as a copy of the
entire data space of the parent will be made.  The code between the
fork() and exec() really doesn't need that copy, as all it does is make
a small modification to the context of the child process.

We need to factor out the management of process contexts (the set of
open files, mapping of signals, etc.) from the management of the
address/data space.

What I'm looking for is a good set of primitives.  From my point of
view, fork() and exec() are not really primitives, as they represent a
number of different operations.

Fork() can be factored to:

	- create new process context
	- copy parent process context to new context
	- create new address space
	- copy parent's data to new address space

Exec() can be factored to:

	- delete address/data space
	- create new address/data space
	- load code from given file
	- begin execution of code from file

(Did I miss something?)

Current typical uses of fork()/exec() could be replaced by:

	- create new process context
	- copy parent process context to new context
	(typical code to reassign files, signals, etc)
	- create new address/data space
	- load code from given file
	- begin execution of code from file

--
Preston L. Bannister
USENET	   :	ucbvax!trwrb!felix!preston
BIX	   :	plb
CompuServe :	71350,3505
GEnie      :	p.bannister

henry@utzoo.UUCP (Henry Spencer) (09/22/87)

> Most uses of fork() are immediately followed by an exec() call.  Why
> else do you think the BSD people came up with vfork()?  A direct "start
> program in a separate process" call would accomplish the same effect
> and would be easily implementable on machines without hardware support
> for (the right flavor of) virtual memory.  

Be careful.  Most uses of fork() are *not* immediately followed by an
exec() call; the two are usually separated by some manipulation of things
like file descriptors and signals.  A combined fork-exec is definitely
possible, since the OS/9 people did it, but it's not as simple as it
sounds at first.
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

peter@sugar.UUCP (Peter da Silva) (09/22/87)

> [you all know the argument: bigger programs means more time wasted
>  on loads]
> 
> Again, if you take into account the changing conditions that exist in
> computer technology, things look much better.  30KB/sec is horribly
> slow for a Unix system (most Suns use Eagles, with rates of 1.8MB/sec
> or higher).

But people with personal computers usually have SASI or SCSI hard disks
with really low transfer rates. This is the market a public domain O/S
has to target. People with Big Iron aren't interested in public domain:
the O/S software itself is a minor cost (well, except for IBM mainframes).
Anything with an Eagle is effectively Big Iron to us peons.
-- 
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
--                 'U`  Have you hugged your wolf today?

brb@hafro.UUCP (Bjorn R. Bjornsson) (09/24/87)

In article <6886@eddie.MIT.EDU>, jbs@eddie.MIT.EDU (Jeff Siegal) writes:
>In article <8579@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>>... the Sun 3.2 ls(1) is four times the size of the V7 ls(1). ...
>
>However, a more meaningful exercise would be to graph the cost of the
>memory used by ls(1) versus time.  This is not recommended if you are
>likely to be nauseated by the failure of Unix developers to take best
>advantage of available "resources".
>
>Seriously, rapidly changing conditions are a fact in the computer
>industry.  To attempt to use such volatile constraints as a metric
>over time doesn't make too much sense.

True as this may be, keep in mind that the programs in question:

 i)	Provide comparable functionality.
 ii)	Were written in the same source language.
 iii)	Were compiled by compilers that produce code
	of comparable density.

It follows that a step backwards has been taken in programming
methodology, unless the newer version shows considerable gains
in efficiency or the older version was highly optimized.

It's dismaying to see minor [or no] upgrades to software
functionality eat up a good portion of the order-of-magnitude
difference in hardware.

If nothing else, the PDP-11s were good teachers to those of
us fortunate enough [up to, say, 1980] to spend years working
on them, as well as those of us unfortunate enough [say,
after 1980] to spend years with them B-).

Maybe we should have started a relief fund for Henry years
ago?  Anyway I'm happy to see people get their hands on
decent machinery, the ones that can use it that is.

		Bjorn R. Bjornsson
		{uunet!mcvax, enea}!hafro!brb

chips@usfvax2.UUCP (09/24/87)

In article <7672@felix.UUCP>, preston@felix.UUCP (Preston Bannister) writes:
} In article <838@usfvax2.UUCP> chips@usfvax2.UUCP (Chip Salzenberg) writes:
} 
} >There is one thing about UNIX fork()-exec() that you've overlooked --
} >after the fork(), the child process can set up the environment of the
} >soon-to-be-exec'ed process by modifying its own environment.
} >(Can you say `pipes, I/O redirection and current directory'?
} >I knew you could.)
} 
} What I should have said :-) was that the
} code executed between the fork() and exec() call typically does not
} need the full semantics of fork().
} 
} An extreme example: imagine a program using a large amount of memory
} that wants to run some other program, with the other program's output
} going into a pipe that the first one reads.
} 
} We need to factor out the management of process contexts (the set of
} open files, mapping of signals, etc.) from the management of the
} address/data space.
} 
} --
} Preston L. Bannister

I agree that fork() does much more than is necessary in some cases.

For example, I use an editor that forks twice as preparation for running
an area of text through a filter; that really _is_ expensive, since our editor
is quite large.  (No, not emassive :-}; the Rand editor `E'.)
-- 
Chip Salzenberg            UUCP: "uunet!ateng!chip"  or  "chips@usfvax2.UUCP"
A.T. Engineering, Tampa    Fidonet: 137/42    CIS: 73717,366
"Use the Source, Luke!"    My opinions do not necessarily agree with anything.

root@hobbes.UUCP (09/25/87)

> Most uses of fork() are immediately followed by an exec() call.

When this came up last spring, someone (gwyn?) grep'd thru all the Unix
source and found a grand total of TWO places where fork() was immediately
followed by exec()!  All the others had dup()s, close()s, and setuid()s, etc
stuck in between.  In the future, please try to ground your "facts" in
something solid before making sweeping statements like that...

   John

-- 
John Plocher uwvax!geowhiz!uwspan!plocher  plocher%uwspan.UUCP@uwvax.CS.WISC.EDU

peter@sugar.UUCP (Peter da Silva) (09/25/87)

In article <1745@ncr-sd>, greg@ncr-sd (Greg Noel) writes:
> In article <8490@think.UUCP> rlk@THINK.COM writes:
> >....  The PDP11 didn't have virtual memory either,
> >if my memory serves me.  ....
> 
> At the risk of re-opening an old debate, the PDP-11 \does/ have virtual
> memory.  It's just that, for various technical reasons, the original Unix
> implementation for it chose to use swapping instead of paging as its virtual
> memory technique.

And neither did any other operating system for the PDP-11 (RSX, RSTS, RT-11),
probably because it didn't in fact have the capability of supporting VM.
Why do you think DEC developed the Virtual Address Extension (VAX) in the
first place?

> Yes, it's a nit, but the PDP-11 is a fine machine, and deserves to be
> remembered correctly.

As a great little non-virtual system. Nothing wrong with that. Sometimes
virtual memory means virtual performance, as a good many PDP-11 fans have
pointed out. You can run way more users on and get way better real-time response
from a PDP 11/70 than any VAX you care to name.
-- 
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
--                 'U`  Have you hugged your wolf today?
-- Disclaimer: These aren't mere opinions... these are *values*.

peter@sugar.UUCP (Peter da Silva) (09/26/87)

Just one question. What are you doing between the time you delete the address
and data space and load the new one? Running out of /dev/null?

Yes, fork isn't a machine primitive. Very few UNIX system calls are. Read, for
example, "should" be:

	Read required block from file.
	Copy data from block to buffer.
	If any more blocks need to be read, repeat.
	Update current pointer.
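
For comparison, here is roughly what that spelled-out read looks like,
using an in-memory array as a stand-in "device"; all of the names
(toyfile, get_block, toy_read) are made up for illustration:

```c
#include <string.h>

#define BLKSIZE 512

/* A toy "file": a byte array reachable only through whole-block
   reads, to show the work that read(2) hides. */
struct toyfile {
    const char *data;
    long size;
    long pos;                         /* current pointer */
};

/* Fetch one whole block from the "device", zero-padding the tail. */
static void get_block(const struct toyfile *f, long blkno, char *blk)
{
    long off = blkno * BLKSIZE;
    long n = f->size - off;
    if (n < 0) n = 0;
    if (n > BLKSIZE) n = BLKSIZE;
    memset(blk, 0, BLKSIZE);
    memcpy(blk, f->data + off, n);
}

/* The read loop: fetch each required block, copy the wanted bytes,
   repeat, then update the current pointer. */
static long toy_read(struct toyfile *f, char *buf, long want)
{
    char blk[BLKSIZE];
    long done = 0;

    if (want > f->size - f->pos)
        want = f->size - f->pos;
    while (done < want) {
        long blkno = (f->pos + done) / BLKSIZE;
        long off   = (f->pos + done) % BLKSIZE;
        long n     = BLKSIZE - off;
        if (n > want - done)
            n = want - done;
        get_block(f, blkno, blk);             /* read required block */
        memcpy(buf + done, blk + off, n);     /* copy block to buffer */
        done += n;                            /* more blocks?  repeat */
    }
    f->pos += done;                           /* update current pointer */
    return done;
}
```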

A number of operating systems expect you to do all this stuff. I'll stick with
UNIX and read/write/fork/exec, thanks.
-- 
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
--                 'U`  Have you hugged your wolf today?
-- Disclaimer: These aren't mere opinions... these are *values*.

kinsell@hpfcda.UUCP (09/26/87)

>> Again, if you take into account the changing conditions that exist in
>> computer technology, things look much better.  30KB/sec is horribly
>> slow for a Unix system (most Suns use Eagles, with rates of 1.8MB/sec
>> or higher).
>
>But people with personal computers usually have SASI or SCSI hard disks
>with really low transfer rates. 

Sorry for the drift, but . . .

The numbers quoted above aren't terribly fair to SCSI.  Even some extremely
inexpensive SCSI discs available now have burst rate capability of 1.5 Meg/sec
on the bus, although with lower average rates due to the mechanisms.  The
1.8 Meg figure for the Eagle sounds like a sustainable average rate if doing
a raw read, but going through a file system slows things down considerably.

-Dave Kinsell
 hplabs!hpfcla!d_kinsell

greg@ncr-sd.SanDiego.NCR.COM (Greg Noel) (09/26/87)

>In article <1745@ncr-sd>, greg@ncr-sd (Greg Noel) writes:
>> ... the PDP-11 \does/ have virtual
>> memory.  It's just that, for various technical reasons, the original Unix
>> implementation for it chose to use swapping instead of paging as its virtual
>> memory technique.

In article <819@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>And neither did any other operating system for the PDP-11 (RSX, RSTS, RT-11),
>probably because it didn't in fact have the capability of supporting VM.

You don't give the criteria by which you make this rather bald claim, but I'll
try to respond to it anyway.

The test for virtual memory is whether the name space of the process is
independent of the name space of the processor; that is, the memory seen by the
process is the same, no matter where it is located in physical (real) memory.
In other words, if the process can be moved to a new place in memory and is
unaware that it has been moved, then the process memory is virtual, and the
architecture supports virtual memory.  The PDP-11 passes this test.

If you are trying to say that the PDP-11 didn't permit a process to be run
without all of its image mapped, that is indeed true of some processor models.
(But not all models -- I've seen experimental versions of Unix on the PDP-11
that were demand paged.)  But this isn't a requirement for virtual memory; it
just makes alternative virtual memory schemes (like paging) more attractive.

>Why do you think DEC developed the Virtual Address Extension (VAX) in the
>first place?

Yes, DEC did indeed extend the amount of virtual memory available, from a
maximum of two 64k pieces to ~2G.  I'll even agree that that's quite an
extension -- over four orders of magnitude.  But they extended the amount,
not the concept.

>..... Sometimes
>virtual memory means virtual performance, as a good many PDP-11 fans have
>pointed out. You can run way more users on and get way better real-time
>response from a PDP 11/70 than any VAX you care to name.

There's some truth in that.  I suspect that it's more of a function of
the increasing size of programs (and the disk cost to read them in) than
a function of the virtual memory architecture (although it \is/ due to
virtual memory that programs have been able to bloat so much).  But that
is currently being mooted elsewhere, so I won't get on that soap box here.

BTW, I'm assuming that you meant "interactive" when you said "real-time,"
since in an actual hard-real-time environment, I would want my tasks running
with the memory management turned off and in a soft-real-time environment, I
would want my tasks locked down.  In either case, the overhead for virtual
memory would be similar and the faster processor would win.
-- 
-- Greg Noel, NCR Rancho Bernardo     Greg.Noel@SanDiego.NCR.COM

mash@mips.UUCP (John Mashey) (09/26/87)

In article <819@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>In article <1745@ncr-sd>, greg@ncr-sd (Greg Noel) writes:
>> At the risk of re-opening an old debate, the PDP-11 \does/ have virtual
>> memory.  It's just that, for various technical reasons, the original Unix
>> implementation for it chose to use swapping instead of paging as its virtual
>> memory technique.

>And neither did any other operating system for the PDP-11 (RSX, RSTS, RT-11),
>probably because it didn't in fact have the capability of supporting VM.
>Why do you think DEC developed the Virtual Address Extension (VAX) in the
>first place?

Greg was right in the first place.  As a good example, somebody at
Naval Postgraduate School did a thesis where they modified UNIX to
run demand-paged on an 11/55, and did various performance measurements.
They found the paged version was superior in only a small domain.
I recall the thesis included an honest, but somewhat chagrined, comment
like "Thompson was right; simple wins in this case".

Why should this be? The technical reasons Greg alludes to are simple:

If programs are relatively small [64K I + 64K D max]
and if pages are relatively large (compared to size) [8K],
then programs are likely to touch every or almost every page very quickly.
In this case, you get better disk performance, and have simpler,
denser kernel data structures, by swapping instead of paging.
I.e., if working set = size of program, you might as well swap.
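
A back-of-envelope version of that argument, with assumed
(illustrative) disk numbers - say 30 ms of seek/rotational overhead
per transfer and 500 KB/sec sustained - goes like this:

```c
/* Milliseconds to load an image with "transfers" separate disk
   operations of kb_each KB apiece, given a fixed per-transfer
   overhead and a sustained rate in KB per ms.  The numbers fed in
   below are assumptions for illustration, not measurements. */
static double load_ms(int transfers, double kb_each,
                      double seek_ms, double kb_per_ms)
{
    return transfers * (seek_ms + kb_each / kb_per_ms);
}

/* One swap of a full 64K-I + 64K-D image:
       load_ms(1, 128.0, 30.0, 0.5)  ->  286 ms
   Demand-paging the same image in 16 8K pieces:
       load_ms(16, 8.0, 30.0, 0.5)   ->  736 ms  */
```

With a working set equal to the whole program, the per-transfer
overhead is paid 16 times over, so swapping wins.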
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

batson@cg-atla.UUCP (Jay Batson X5927) (09/26/87)

In article <233@hobbes.UUCP> root@hobbes.UUCP (John Plocher) writes:
>> Most uses of fork() are immediately followed by an exec() call.
>
>When this came up last spring, someone (gwyn?) grep'd thru all the Unix
>source and found a grand total of TWO places where fork() was immediately
>followed by exec()!  All the others had dup()s, close()s, and setuid()s, etc
>stuck in between.  In the future, please try to ground your "facts" in
>something solid before making sweeping statements like that...

Well now, I don't usually get in the middle of other people's
arguments, but when a flamer is himself working from hearsay, I take
exception.

In response to this accusation of John's, I did this:
	less `egrep -l fork /src/bin/*.c /src/ucb/*.c`
Not a completely extensive search, but a good sample.  The results are:
	Programs forking			Programs forking
	then (more or less) exec'ing		but NOT exec'ing
	(possibly conditionally)
------------------------------------- --------------------------------
cc					mail
du					wall
ed					leave
login					rsh
rmail
time
write
rcp
rlogin
sccs
script

Note that this search did in fact reveal that there is often SOME
code between fork and exec (not always), therefore somewhat validating
John.  However, from the point of view of doing UNIX/C coding for
a few years (I mean, 5 isn't many, but it's more than some), I recall that
the vast majority of programs that fork usually exec QUITE
quickly after the fork, and any processing done afterwards is often
just laziness (it could have been set up before the fork - flames to
/dev/null - I know there are times when you can't do such setup...).

Therefore, children, what is the lesson for today?  Hearsay is dangerous....


Jay
_________________________________________________________________________
When I'm finally old enough to believe my opinions are sufficiently wise
and important for others to listen to, nobody will want to listen to "that
old fart...."
	

nipps@aa.ecn.purdue.edu (James L Keane) (09/27/87)

Somehow I fail to realize how a PDP's VM capability 

gwyn@brl-smoke.ARPA (Doug Gwyn ) (09/27/87)

In article <7672@felix.UUCP> preston@felix.UUCP (Preston Bannister) writes:
>The fork call is going to be awfully expensive, as a copy of the
>entire data space of the parent will be made.

It need not be; don't copy the pages until an attempt to modify them occurs.

peter@sugar.UUCP (Peter da Silva) (09/28/87)

In article <1755@ncr-sd.SanDiego.NCR.COM>, greg@ncr-sd.SanDiego.NCR.COM (Greg Noel) writes:
> In article <819@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
> >And neither did any other operating system for the PDP-11 (RSX, RSTS, RT-11),
> >probably because it didn't in fact have the capability of supporting VM.
> 
> You don't give the criteria by which you make this rather bald claim, but I'll
> try to respond to it anyway.

Virtual memory == ability for a task to run without its entire address space
residing in primary memory, in a manner that is transparent to the task
itself. If you allow the task to manage the "VM", then you're letting
overlays in and anything with secondary memory qualifies. This implies
that it should be able to recover from a page fault.

Now, someone else has claimed that the PDP-11 in some incarnations was able to
do this. Since they seem to know whereof they speak, I'll accept it. But I
stand by my claim that PDP-11 does not imply VM.

What you're talking about...

> The test for virtual memory is whether the name space of the process is
> independent of the name space of the processor; that is, the memory seen by
> the process is the same, no matter where it is located in physical (real)
> memory.

Is mapped memory (at least that's what DEC calls it, and it's a reasonable
description, especially since we're talking about DEC processors).

> BTW, I'm assuming that you meant "interactive" when you said "real-time,"
> since in an actual hard-real-time environment, I would want my tasks running
> with the memory management turned off and in a soft-real-time environment, I
> would want my tasks locked down.  In either case, the overhead for virtual
> memory would be similar and the faster processor would win.

Yeh, yeh. For hard real-time I wouldn't want anything more complicated in the
way of operating systems than a scheduler. Down here in Houston real-time
generally refers to SCADA, which makes most people's "soft realtime" look
like granite.
-- 
-- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
--                 'U`  Have you hugged your wolf today?
-- Disclaimer: These aren't mere opinions... these are *values*.

ejbjr@ihlpg.ATT.COM (Branagan) (09/28/87)

> > >....  The PDP11 didn't have virtual memory either,
> > >if my memory serves me.  ....
> > 
> > At the risk of re-opening an old debate, the PDP-11 \does/ have virtual
> > memory.  It's just that, for various technical reasons, the original Unix
> > implementation for it chose to use swapping instead of paging as its virtual
> > memory technique.
> 
> And neither did any other operating system for the PDP-11 (RSX, RSTS, RT-11),
> probably because it didn't in fact have the capability of supporting VM.
> Why do you think DEC developed the Virtual Address Extension (VAX) in the
> first place?

Virtual memory != Support for demand paging

The PDP-11 (later models only) does have virtual memory (i.e. supports
a translation from logical to physical address).  This was necessary to
support more than 64K of physical memory on a machine only capable of
addressing 64K - sort of backwards from the typical situation of using
virtual memory to support processes which use more memory than physically
available on the machine (or to the process).

The PDP-11 does not have support for paging (i.e. does not generate
convenient hardware interrupts allowing a page to be brought into memory).
Actually the problem here is that it is difficult to figure out how much
of the instruction that faulted has already executed, and to complete
execution of the instruction which generated the fault (I've seen this
done by interpreting the instruction in software, but it is very slow
and gross).  It looks like the designers made an attempt to add paging
capabilities, but it was very difficult without major design and microcode
changes (which led to the VAX 11/780).

SUMMARY:

	The PDP-11 (many models) does support virtual memory, though
	not for the same purposes as most machines.  The PDP-11 has very
	poor support for paging, but it is theoretically (not practically)
	possible.
-- 
-----------------
Ed Branagan
ihnp4!ihlpg!ejbjr
(312) 369-7408 (work)

gwyn@brl-smoke.ARPA (Doug Gwyn ) (09/28/87)

In article <819@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>probably because it didn't in fact have the capability of supporting VM.

Greg Noel was right -- the PDP-11 did support virtual memory.
Basically, three things are necessary for full support of demand-paged
virtual memory:  mappable per-process virtual address space pages,
generation of a trap when a reference is made to an unmapped page,
and ability to restart the faulted instruction after changing the map.
All these facilities existed on high-end PDP-11 models with KT11 MMUs.
(The low-end models did not have such a memory management unit.)

As others have pointed out, there were experimental virtual memory
implementations of PDP-11 UNIX.

>Why do you think DEC developed the Virtual Address Extension (VAX) in the
>first place?

To get a larger per-process address space!

>You can run way more users on and get way better real-time response
>from a PDP 11/70 than any VAX you care to name.

Both 22-bit PDP-11s and VAXes have to perform two levels of address
translation.  Some VAXes provide poorer peripheral I/O paths, but
apart from that there is little difference in their real-time abilities.

henry@utzoo.UUCP (Henry Spencer) (09/29/87)

> The fork call is going to be awfully expensive, as a copy of the
> entire data space of the parent will be made...

Nonsense, not on a sensibly implemented system (4BSD as shipped from
Berkeley does not count, although in fairness I should say that this one
is not entirely their fault -- hardware bugs got in their way).  It just
has to *look* like a copy is made.  Any half-decent virtual-memory system
will do this with copy-on-write, so the bulk of the data never gets copied
and the fork is cheap.
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

ron@topaz.rutgers.edu (Ron Natalie) (09/29/87)

Actually, the PDP-11 has a perfectly good interrupt for bringing in
pages.  UNIX uses it to know when to grow the stack.  The problem is
that there are only eight (or 16) segments of memory, so paging doesn't
get you a whole lot.  In addition, the shell does user-mode page faults
to know when to grow its heap.

-Ron

dave@sdeggo.UUCP (David L. Smith) (09/29/87)

In article <6488@brl-smoke.ARPA>, gwyn@brl-smoke.ARPA (Doug Gwyn ) writes:
> In article <819@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
> >You can run way more users on and get way better real-time response
> >from a PDP 11/70 than any VAX you care to name.
> 
> Both 22-bit PDP-11s and VAXes have to perform two levels of address
> translation.  Some VAXes provide poorer peripheral I/O paths, but
> apart from that there is little difference in their real-time abilities.

There may be little difference in their actual processor speeds, but as
far as actual response goes, it's pretty incredible.  We used to run 30 users
on a PDP-11/70 with 512K (bytes!) of memory and an RM03.  No shared libraries,
no shared executables, each user had their own task.  However, since the
swap size is so small (64 or 128K) you can swap them in and out without
even noticing it.  It would have really flown with a full 4M.
Heck, I used to run real-time terminal programs written in interpreted
BASIC-Plus and have them work (but there couldn't be much of anything else
on the system).



-- 
David L. Smith
{sdcsvax!amos,ihnp4!jack!man, hp-sdd!crash, pyramid}!sdeggo!dave
sdeggo!dave@amos.ucsd.edu 
"How can you tell when our network president is lying?  His lips move."

tim@ism780c.UUCP (Tim Smith) (09/30/87)

In article <840@usfvax2.UUCP> chips@usfvax2.UUCP (Chip Salzenberg) writes:
< For example, I use an editor that forks twice as preparation for running
< an area of text through a filter; that really _is_ expensive, since our editor
< is quite large.  (No, not emassive :-}; the Rand editor `E'.)

Forking does not have to be expensive for large programs.  Fork should
just increase the reference counts on the text pages and data pages, and
change all data pages to be read-only.  Only copy pages that get a write
fault.
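
That scheme can be sketched in miniature.  The structs and names here
are my own illustration - a real kernel keeps this state in MMU page
tables - but the bookkeeping is the same: fork shares pages and bumps
reference counts, and only a write fault copies:

```c
#include <stdlib.h>
#include <string.h>

#define PGSIZE 64

struct page   { char data[PGSIZE]; int refs; };
struct aspace { struct page **pg; int npages; };

/* "fork": share every page with the parent, incrementing reference
   counts instead of copying. */
static struct aspace cow_fork(const struct aspace *parent)
{
    struct aspace child;
    child.npages = parent->npages;
    child.pg = malloc(child.npages * sizeof(struct page *));
    for (int i = 0; i < child.npages; i++) {
        child.pg[i] = parent->pg[i];
        child.pg[i]->refs++;                  /* share, don't copy */
    }
    return child;
}

/* Called on a write fault: copy the page only if it is shared. */
static void cow_write(struct aspace *as, int i, int off, char c)
{
    if (as->pg[i]->refs > 1) {
        struct page *copy = malloc(sizeof *copy);
        memcpy(copy->data, as->pg[i]->data, PGSIZE);
        copy->refs = 1;
        as->pg[i]->refs--;
        as->pg[i] = copy;                     /* now privately owned */
    }
    as->pg[i]->data[off] = c;
}
```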
-- 
Tim Smith, Knowledgian		{sdcrdcf,uunet}!ism780c!tim
				tim@ism780c.isc.com
"Oh I wish I were Matthew Wiener, That is who I truly want to be,
 'Cause if I were Matthew Wiener, Tim Maroney would send flames to me"

kent@tifsie.UUCP (Russell Kent) (09/30/87)

in article <6476@brl-smoke.ARPA>, gwyn@brl-smoke.ARPA (Doug Gwyn ) says:
> 
> In article <7672@felix.UUCP> preston@felix.UUCP (Preston Bannister) writes:
>>The fork call is going to be awfully expensive, as a copy of the
>>entire data space of the parent will be made.
> 
> It need not be; don't copy the pages until an attempt to modify them occurs.

Unfortunately, the machines that MINIX currently runs on do not
have the memory management hardware to support the copy-on-write
interrupt (a la the DEC VAX processors).  Nice try though :-)

-- 
Russell Kent			Phone: +1 214 995 3501
Texas Instruments - MS 3635	Net mail:
P.O. Box 655012			...!{ihnp4,uiucdcs}!convex!smu!tifsie!kent	
Dallas, TX 75265		...!ut-sally!im4u!ti-csl!tifsie!kent

chips@usfvax2.UUCP (09/30/87)

In article <15124@topaz.rutgers.edu>, Ron Natalie writes:
> In addition, the shell does user mode page faults
> to know when to grow its heap.
> 
> -Ron

Don't I know it!  I tried to port the Bourne shell (v7) to a 68000 box running
UNOS, which was Charles River Data System's answer to UNIX.  UNOS was no
problem, but since the 68000 doesn't know how to restart instructions that
cause bus faults, I *never* got sh to work reliably.  (People used it anyway,
which shows you how bad CRDS's command interpreter was!)

The C shell, on the other hand, worked right out of the box.  (Thanks, Bill!)
Of course, I had to adapt for UNOS job control -- wait() could return the same
pid any number of times as processes were suspended and restarted -- but that's
peanuts in the porting game.  :-)

-- 
Chip Salzenberg            "chip%ateng.uucp@UU.NET"  or  "uunet!ateng!chip"
A.T. Engineering, Tampa    CIS: 73717,366    Last Chance: "chips@usfvax2.UUCP"
"Use the Source, Luke!"    My opinions do not necessarily agree with anything.

elg@usl (Eric Lee Green) (10/02/87)

>>In article <1745@ncr-sd>, greg@ncr-sd (Greg Noel) writes:
>>> ... the PDP-11 \does/ have virtual
>>> memory.  It's just that, for various technical reasons, the original Unix
>>> implementation for it chose to use swapping instead of paging as its virtual
>>> memory technique.

I recently purchased the Bach book.  The history he gives indicates
that the probable reason Unix originally used swapping instead of
paging is that the early models of the PDP-11 that they originally
implemented Unix on did not support paging.  In any event, you COULD
run more programs than you had physical memory for, so I'd say that's
virtual memory.

--
Eric Green  elg@usl.CSNET       from BEYOND nowhere:
{ihnp4,cbosgd}!killer!elg,      P.O. Box 92191, Lafayette, LA 70509
{akgua,killer}!usl!elg        "there's someone in my head, but it's not me..."

jfh@killer.UUCP (10/03/87)

In article <819@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> In article <1745@ncr-sd>, greg@ncr-sd (Greg Noel) writes:
> > In article <8490@think.UUCP> rlk@THINK.COM writes:
> > >....  The PDP11 didn't have virtual memory either,
> > >if my memory serves me.  ....
> > 
> > At the risk of re-opening an old debate, the PDP-11 \does/ have virtual
> > memory.  It's just that, for various technical reasons, the original Unix
> > implementation for it chose to use swapping instead of paging as its virtual
> > memory technique.
> 
> And neither did any other operating system for the PDP-11 (RSX, RSTS, RT-11),
> probably because it didn't in fact have the capability of supporting VM.
> Why do you think DEC developed the Virtual Address Extension (VAX) in the
> first place?
> 
> > Yes, it's a nit, but the PDP-11 is a fine machine, and deserves to be
> > remembered correctly.
> 
> As a great little non-virtual system. Nothing wrong with that. Sometimes
> virtual memory means virtual performance, as a good many PDP-11 fans have
> pointed out. You can run way more users on and get way better real-time response
> from a PDP 11/70 than any VAX you care to name.
> -- 
> -- Peter da Silva `-_-' ...!hoptoad!academ!uhnix1!sugar!peter
> --                 'U`  Have you hugged your wolf today?
> -- Disclaimer: These aren't mere opinions... these are *values*.

No, the registers and micro-code were present in _certain_ PDP-11's to handle
virtual memory.  I believe that certain micro-Vaxen _didn't_ actually have
enough support to handle virtual memory as well as, say, a PDP-11/70.

All that is required to support virtual memory is the ability to generate
a CPU trap/fault on a reference to a non-existent/non-resident page of
memory, and then figure out how to restart the instruction so that it
completes as if the page were resident.

Might sound a bit simple, but for all their greatness, the 68000 and 808[86]
can't handle virtual memory because in the general case, an instruction can
not be restarted.

The PDP-11's of the larger type (44/45/70/73/(34?)) could support virtual
memory if the memory management hardware had been installed.  When a
non-resident segment (8 8KB segments for instruction and 8 more for data,
except the 34, which didn't support separate I&D) was referenced, a trap
was generated and the address of the instruction and PSW were stacked.

The operating system then needed to simulate the rest of the instruction,
if the failing cycle was a write.  In the case of a read, you could make
the segment valid (load the MMU's and what not) if no registers had been
modified during the instruction, or attempt to simulate the instruction.

Whew!  Needless to say, I'd much rather swap than go through that mess.
Anyway, Unix on 11's did do some neat things.  I seem to recall that
a trap on a write into a deeper stack address caused the system to back up
that instruction and expand the stack.  This is the suggested use for the
direction bit in the MMU, according to my old '45 manual.  If I remember
from the days of weird gone by, we had a C compiler that on entry to a
function generated a jump past the jsr pc,cret which was a jump back to
the instruction after the first jump.  I suppose this `feature' could have
been used to demand page text in, had anyone bothered.

Anyhow, go read a _modern_ (or even old, like 11/45) manual and you too
will become a believer...

- John.
-- 
John F. Haugh II		HECI Exploration Co. Inc.
UUCP:	...!ihnp4!killer!jfh	11910 Greenville Ave, Suite 600
"Don't Have an Oil Well?"	Dallas, TX. 75243
" ... Then Buy One!"		(214) 231-0993

johnson@uiucdcsp.cs.uiuc.edu (10/03/87)

>Virtual memory == ability for a task to run without its entire address space
>residing in primary memory, in a manner that is transparent to the task
>itself.

This answer would be marked as incorrect in the O.S. class that I teach.

> The test for virtual memory is whether the name space of the process is
> independent of the name space of the processor; that is, the memory seen by
> the process is the same, no matter where it is located in physical (real)
> memory.

This answer would be judged correct.

I suppose that either answer could serve as a definition for virtual memory,
but you have to pick one and use it.  Perhaps it depends on the textbook
that you use.  The definition that you use determines whether you think
that the PDP-11 has virtual memory or not.

guy%gorodish@Sun.COM (Guy Harris) (10/04/87)

> I recently purchased the Bach book.  The history he gives indicates
> that the probable reason Unix originally used swapping instead of
> paging is that the early models of the PDP-11 that they originally
> implemented Unix on did not support paging.

I would be very surprised if this were the case.  The PDP-11/20 was the
first PDP-11 on which UNIX was implemented (the first *machine* it was
implemented on was a PDP-7), and it didn't have an MMU of any sort.  The
PDP-11/45, which was the second PDP-11 on which UNIX was implemented, did,
however, have sufficient support for paging.  See John Mashey's article,
discussing paging UNIX on a PDP-11/55 (which was an 11/45 with bipolar memory
hung off a fast memory bus).  That article also cites what is probably the
*real* reason UNIX didn't use paging; it added complexity and didn't buy you
much at all.

Some of the later PDP-11s that UNIX was ported to had an MMU but may or may not
have been able to support demand paging; the 11/45's MMU included a register
that, by recording changes made to registers by auto-increment and
auto-decrement addressing modes, permitted the OS to back up an instruction
that faulted in midstream.  Other PDP-11s, however, lacked it; a backup routine
that was sufficient to handle the case of faults taken by references past the
end of the stack was provided for those machines.  I don't think this routine
was sufficient to handle *all* the cases of faults taken in midstream, at least
not in a machine-independent manner.
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

guy%gorodish@Sun.COM (Guy Harris) (10/04/87)

> I believe that certain micro-Vaxen _didn't_ actually have enough support to
> handle virtual memory as well as, say, a PDP-11/70.

Micro-*VAXen*?  As far as I know, the MMU on all VAXes is the same.  Did you
mean micro-PDP-11s?

> Might sound a bit simple, but for all its greatness, the 68000 and 808[86]
> can't handle virtual memory because, in the general case, an instruction
> cannot be restarted.

Well, there *are* ways of doing virtual memory without that kind of support; I
think both Apollo and Masscomp did it with two 68000s, one of which was halted
in mid-bus-cycle when it referenced an invalid page.  The other one would then
wake up and fetch the page, and let the original one continue its bus cycle
when the page had been read in.
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

henry@utzoo.UUCP (Henry Spencer) (10/07/87)

> ... Other PDP-11s, however, did not; a backup routine
> that was sufficient to handle the case of faults taken by references past the
> end of the stack was provided for those machines.  I don't think this routine
> was sufficient to handle *all* the cases of faults taken in midstream, at
> least not in a machine-independent manner.

Correct.  There were cases that simply could not be handled at all:  when
the same instruction modified the same register twice, in general one could
not tell how many modifications had been done.  The compiler largely avoided
such instructions, fortunately.  Making this work for floating point (another
case where registers could get modified repeatedly, since e.g. pushing a
double onto the stack required four one-word push operations) had at least
the potential of requiring model-specific code; I don't know whether this
was ever actually necessary, or whether the processors in question were all
sufficiently similar to avoid it.  I worked on dumb-MMU 11s and floating-
point 11s, but never seriously on a machine that was both.

(In case anyone is interested, the way the MMU differences came about was
that the 11/45 MMU, the first for the 11 series, was kind of a kitchen-sink
design with all sorts of vaguely useful-looking things.  For the mid-range
11/40 a bit later, DEC simplified the design by taking out all the things
that they had never bothered to use.  Unfortunately, some of those things
*were* in use by others.  They eventually more-or-less fixed this; the 11/44
and [I think] all the 11s that followed it have an MMU that is only a slight
subset of the 11/45 one, omitting only a few features that are *truly* more
or less useless.)
-- 
PS/2: Yesterday's hardware today.    |  Henry Spencer @ U of Toronto Zoology
OS/2: Yesterday's software tomorrow. | {allegra,ihnp4,decvax,utai}!utzoo!henry

mike@turing.unm.edu (Michael I. Bushnell) (10/12/87)

In article <6488@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
~In article <819@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
~>Why do you think DEC developed the Virtual Address Extension (VAX) in the
~>first place?
~
~To get a larger per-process address space!

Perhaps it has something to do with the fact that VAX stands for VAST
Address Extension?  Or so I was told.


					Michael I. Bushnell
					a/k/a Bach II
					mike@turing.unm.edu
					{ucbvax,gatech}!unmvax!turing!mike
---
Tex SEX!  The HOME of WHEELS!  The dripping of COFFEE!!  Take me
 to Minnesota but don't EMBARRASS me!!
				-- Zippy the Pinhead