[comp.sys.amiga] Multitasking: Amiga vs. Unix, and apple comments

michael@stb.UUCP (Michael) (01/01/70)

Someone asked for an explanation of why Amiga's multitasking system is
better than unix's, and commented on the poor performance of interactive
programs on unix, especially with demand paging.

#1. Priority.
Unix goes out of its way to give everyone some time unless the system is totally
hogged. The Amiga, in contrast, has fixed priority levels--if you are at a higher
priority, you get all the time you want. Basically, it's like saying "Nice jobs
finish last, er, niced jobs get no time unless no one else wants it", as
opposed to "Niced jobs get 80% of what they would have".
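
To make the Unix side of that concrete, here is a minimal sketch in C (my
own illustration, not anybody's production code) of a CPU hog that renices
itself to +20. Under a traditional Unix scheduler it still creeps along
whenever nothing else wants the CPU; on the Amiga the rough equivalent,
exec's SetTaskPri(), really does mean "no time at all while anything higher
is runnable".

    /* hog.c -- hedged illustration: renice ourselves to +20 and spin.
     * Under the traditional Unix scheduler this process is throttled,
     * not stopped: it still gets the CPU whenever the machine is idle. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned long count = 0;

        nice(20);                       /* be as nice as possible */
        for (;;) {
            count++;                    /* burn CPU */
            if (count % 100000000UL == 0)
                printf("still running: %lu\n", count);
        }
    }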

#2. Demand paging.
This is good only for one thing: Letting a large program run on a small machine.
If an interactive program has to wait while a background task runs and pages
in, that program is very slow. The Amiga does not suffer from this mixed 
blessing.

Personally, I think this could be improved by giving more priority (read:
resources) to an interactive task even when it's not running. Unix is
notoriously bad with interactive tasks--especially on swapping systems, where as
soon as it does the read(), it gets swapped. (You thought paging was bad?
Swapping seems worse.) The Amiga does not swap, so it can give good response
time.

#3. Two tasks competing for time.
<Sigh>. Ok, so no scheme is perfect. Yes, with two tasks trying to run
at the same time, it will be twice as slow. Yes, every system will have this
problem if they are at equal priority. No, they do not have to be at equal
priority.

The trick is to give the more interactive one higher priority (unix does),
and give higher priority ones more resources (unix does NOT). The Amiga
gives higher priority processes more resources, but only because the one
resource it allocates by priority is CPU time--memory, etc. goes to whoever
asks first.

Now for the Mac. Granted, a well written program that gives up control
at various points can behave just as well as a pre-emptive timeslice
system. However,
A) It is easier to write for pre-emptive systems.
B) A program that has a bug in it (let's face it, they all do) may never
get around to giving up control when you want it to.
C) There is inherent unfairness in it, as two different programs will not
give up control at the same rate; one program will get more time than
the other. This is decided by the program author, not by the user.
(See the sketch below.)
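
To illustrate (a toy sketch in C, with made-up task names, not actual Mac
toolbox code): in a cooperative system the "scheduler" is just a loop that
calls each task and trusts it to return. One task that doesn't return
promptly--because of a bug or just a long computation--starves everyone
else, which is exactly points B and C.

    /* Toy cooperative "multitasking".  Each task is a function that must
     * return quickly to yield the CPU; nothing can pre-empt it. */
    #include <stdio.h>

    static void editor_task(void)  { printf("editor: handled keystroke\n"); }
    static void printer_task(void) { printf("printer: spooled a line\n"); }

    static void hog_task(void)
    {
        /* Imagine "while (!done) recalc();" here with done never set.
         * In a cooperative system the machine is now locked up. */
        printf("hog: recalculating...\n");
    }

    int main(void)
    {
        /* How often each task yields is decided by its author,
         * not by the user or the system. */
        void (*tasks[])(void) = { editor_task, hog_task, printer_task };
        int i, round, ntasks = sizeof tasks / sizeof tasks[0];

        for (round = 0; round < 3; round++)
            for (i = 0; i < ntasks; i++)
                tasks[i]();          /* runs until it chooses to return */
        return 0;
    }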

My own opinion is that we need something in between the two. On Unix, all
programs get CPU time, even when niced. I've had programs run at nice
of +20 on non-paging systems (they run just fine when other people are
doing disk io, and they still get good time on their own IO), even when
I put them there because I wanted them to WAIT. On the other hand, a
low priority program on the Amiga (and unix, if the load is high) will not
even see a signal sent to it.

What we need is a "nice"'ing scheme sort of in the middle, where a little
niceness does make a noticeable difference in performance, and also
in I/O speed. But at the same time, if you get a signal/message/whatever,
you should be guaranteed CPU time quickly to respond to it.
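
Something like the following, as a rough sketch (all the names and numbers
here are invented, not a real scheduler): niceness shrinks a task's
timeslice instead of starving it outright, and a pending signal temporarily
restores a full slice so the task can respond right away.

    /* Hypothetical middle-ground scheduler, purely illustrative. */
    #include <stddef.h>

    struct task {
        int nice;            /* 0 (normal) .. 20 (very nice)           */
        int signal_pending;  /* nonzero if a signal/message is waiting */
        int ticks;           /* timeslice granted this round           */
    };

    #define BASE_TICKS 10

    static void grant_timeslices(struct task *tasks, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            if (tasks[i].signal_pending)
                tasks[i].ticks = BASE_TICKS;   /* full slice: respond now */
            else
                /* niceness costs CPU but never starves: +20 still
                 * gets about 30% of a normal slice                      */
                tasks[i].ticks = BASE_TICKS - (tasks[i].nice * 7) / 20;
        }
    }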

			Michael
-- 
: Michael Gersten		seismo!scgvaxd!stb!michael
: Copy protection? Just say Pirate! (if its worth pirating)

mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) (08/23/87)

In article <56@stb.UUCP> michael@stb.UUCP (Michael) writes:
<#2. Demand paging.
<This is good only for one thing: Letting a large program run on a small machine.

No, it's also good for having a set of runnable programs whose total
size is larger than real memory. The net result of this is better
performance than either a swapping system or a system without that
feature under those conditions. Of course, you get slightly worse
performance if your load isn't that high.

<Personally, I think this could be improved by giving more priority (read:
<resources) to an interactive task even when it's not running. Unix is
<notoriously bad with interactive tasks--esp. on swapping systems where as
<soon as it does the read(), it gets swapped.

Time for "History of Unix." Unix was originally rewritten in C for a
system with few users on at any give time. It had a nice, simple
scheduler to go with that. This is the scheduler that was in v6, v7,
PWB 1.0, PWB 2.0, SysIII, SysV (throught at least r2), and BSD through
2.7 and 4.1. As you pointed out, interactive tasks suffer badly in the
presence of background CPU hogs.

Lots of people know this. George Goble at Purdue was one of them. He
rewrote that scheduler and installed it in his v7 systems. That's the
scheduler that is in 4.2, 4.3, 2.8, 2.9 and 2.10 BSD.

It doesn't really go far enough. If the number of CPU hogs is much
greater than the number of interactive jobs (say, 50% more) then the
interactive jobs start suffering again. There is a single number (4)
in this scheduler that can be changed (make it 8) to fix the problem.
But that just pushes the limit up, and makes all those background jobs
suffer. Also, it's not an easily tunable number; it's a magic number
in kern_clock.c.

There are people experimenting with schedulers that track different
types of resources (disk, tty, memory, cpu, etc.) and compute a
weighted average, where the weights are settable by a system call.
Hopefully, this will wind up in 4.4, if not SysVr4.
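
Roughly the idea, sketched in C with invented names (this is not the
experimental kernel code, just an illustration): decayed usage figures for
several resources are combined with per-resource weights that a system call
would let you adjust.

    /* Illustrative only: priority as a weighted average of recent usage.
     * Field and weight names are made up; lower result == runs sooner. */
    struct usage {
        unsigned cpu;    /* recent CPU ticks      */
        unsigned disk;   /* recent disk transfers */
        unsigned tty;    /* recent terminal I/O   */
        unsigned mem;    /* resident pages        */
    };

    /* Weights, notionally settable by a system call. */
    static unsigned w_cpu = 4, w_disk = 2, w_tty = 1, w_mem = 1;

    static unsigned weighted_priority(const struct usage *u, int nice)
    {
        unsigned wsum = w_cpu + w_disk + w_tty + w_mem;
        unsigned load = w_cpu * u->cpu + w_disk * u->disk
                      + w_tty * u->tty + w_mem * u->mem;

        return load / wsum + (unsigned)(nice + 20);  /* nice shifts the base */
    }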

<#3. Two tasks competing for time.
<<Sigh>. Ok, so no scheme is perfect. Yes, with two tasks trying to run
<at the same time, it will be twice as slow.

Key words "trying to run." Fortunately, most tasks don't try to run
all the time; some of them very little of the time. You only get
"twice as slow" if you've got two cpu hogs. Normal tasks (like 9 of
the 10 I've got running) spend a fair amount of time waiting for disk,
or the user, or whatever.

<The trick is to give the more interactive one higher priority (unix does),
<and give higher priority ones more resources (unix does NOT).

Well, we had a v6 system with what we called the "walk on water" hack
(from that very same George Goble). If your nice was better than -5,
any requests for disk IO went straight to the head of the queue.

This eventually degenerates into first-come-first-served disk scheduling,
which defeats the purpose of disk scheduling. While it's fairly
obvious that user-settable priorities will do a good job (maybe not
great, but good) of CPU scheduling for a single-user system, it's not
at all clear that there is any such disk scheduling strategy. I'd like
to know what AmigaDOS is doing, though.
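
For what it's worth, the "walk on water" idea is simple enough to sketch.
This is an invented illustration in C, not George's actual driver code:
privileged requests cluster at the head of the queue in arrival order, and
everything else gets elevator-sorted behind them--so when most requests are
privileged, you're back to plain first-come-first-served.

    /* Sketch of a disk request queue with a "walk on water" fast path. */
    struct dreq {
        int          block;  /* target block, used by the elevator sort */
        int          nice;   /* nice value of the requesting process    */
        struct dreq *next;
    };

    static struct dreq *disk_queue;

    static void enqueue_request(struct dreq *r)
    {
        struct dreq **pp = &disk_queue;

        /* Requests from processes nicer than -5 skip only past other
         * privileged requests, so they are served in arrival order. */
        while (*pp != 0 && (*pp)->nice < -5)
            pp = &(*pp)->next;

        if (r->nice >= -5)                 /* normal request: elevator sort */
            while (*pp != 0 && (*pp)->block < r->block)
                pp = &(*pp)->next;

        r->next = *pp;
        *pp = r;
    }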

	<mike
--
Round about, round about, in a fair ring-a.		Mike Meyer
Thus we dance, thus we dance, and thus we sing-a.	mwm@berkeley.edu
Trip and go, to and fro, over this green-a.		ucbvax!mwm
All about, in and out, over this green-a.		mwm@ucbjade.BITNET

peter@sugar.UUCP (Peter da Silva) (08/28/87)

In article <56@stb.UUCP>, michael@stb.UUCP (Michael) writes:
> Someone asked for an explanation of why Amiga's multitasking system is
> better than unix's, and made the comment about poor performance for
> interactive programs on unix, esp. on demand paging.

It's not better. It's different. UNIX is optimised to give good response
time to a large number of interactive terminals. The Amiga's scheduler
is optimised to give immediate response to real-time events. The Amiga
is not a multiuser system, and probably can't be. UNIX is not a real-time
system, but you can change the scheduler to make it one. AT&T uses real-time
UNIX all over the phone system.

> #1. Priority.
> Unix goes out of its way to give everyone some time unless the system is totally
> hogged.

which is what you want in a timesharing environment, and isn't too bad in a
low-grade real-time one.

> Amiga, in contrast, has fixed priority levels--if you are at a higher
> priority, you get all the time you want. Basically, it's like saying "Nice jobs
> finish last, er, niced jobs get no time unless no one else wants it", as
> opposed to "Niced jobs get 80% of what they would have".

I'd rather have niced jobs get 80% of expected time. That's what nice is for...
to allow you to be nice to other users without keeping your job from
running at all. Nice is for CPU hogs who want even less of the system than
they would normally get.

> #2. Demand paging.
> This is good only for one thing: Letting a large program run on a small machine.
> If an interactive program has to wait while a background task runs and pages
> in, that program is very slow.

Because of the transparent asynchronous I/O shared by the Amiga and UNIX,
paging in another program does not make you wait. The only time you have
to wait for paging is when you're the one being paged. Isn't that better
than not being able to run at all?

> The Amiga does not suffer from this mixed blessing.

If you never run more programs than you have memory, the overhead for a paged
system is minuscule. If you need to run more programs than you have memory,
isn't it better to have the option?

If they don't get in your way, having more options is never a defect.

My personal experience is that UNIX pages quickly and unobtrusively in a
lightly loaded environment. You obviously have used a heavily loaded system.
UNIX handles increasing load in such a nice way that there is this tendency
to just keep pouring on the users. It's like an Italian 12-cylinder engine:
there always seems to be more RPM out there. Unfortunately, computer centers
don't have RPM limiters.

> The Amiga does not swap, so it can give good response time.

UNIX does not swap (or page) under the same circumstances. Try playing with
an AT running Microport System V with 5 MEG of RAM some time.

> programs get CPU time, even when niced. I've had programs run at nice
> of +20 on non-paging systems (they run just fine when other people are
> doing disk io, and they still get good time on their own IO), even when
> I put them there because I wanted them to WAIT.

That's not what nice is for. The best way of getting a UNIX program to wait
when it's not expecting to have to is to convince it to read from your
terminal. Or switch to BSD UNIX and send a SIGSTOP.
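
A quick sketch of the BSD route (my own throwaway code, nothing official):
any process you own can be frozen and thawed from outside, no source
changes needed, because SIGSTOP can't be caught or ignored.

    /* freeze.c -- stop or continue a process by pid on a BSD-style system.
     * Usage: freeze stop <pid>   or   freeze cont <pid> */
    #include <sys/types.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        pid_t pid;
        int sig;

        if (argc != 3) {
            fprintf(stderr, "usage: %s stop|cont pid\n", argv[0]);
            return 1;
        }
        pid = (pid_t)atoi(argv[2]);
        sig = (argv[1][0] == 's') ? SIGSTOP : SIGCONT;

        if (kill(pid, sig) == -1) {   /* SIGSTOP cannot be caught or ignored */
            perror("kill");
            return 1;
        }
        return 0;
    }

From the shell, "kill -STOP <pid>" and "kill -CONT <pid>" (or ^Z under csh
job control) do the same thing.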

> What we need is a "nice"'ing scheme sort of in the middle, where a little
<niceness does make a noticeable difference in performance, and also
> in I/O speed. But at the same time, if you get a signal/message/whatever,
> you should be guaranteed CPU time quickly to respond to it.

This is basically what UNIX gives you. It's just that UNIX does such a
good job of resource *sharing* that even a major drop in priority doesn't
stop you dead in your tracks.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                  U   <--- not a copyrighted cartoon :->

michael@stb.UUCP (Michael) (08/31/87)

In article <582@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>In article <56@stb.UUCP>, michael@stb.UUCP (Michael) writes: (me)
>> Someone asked for an explanation of why Amiga's multitasking system is
>> better than unix's, and made the comment about poor performance for
>> interactive programs on unix, esp. on demand paging.
>
>It's not better. It's different. UNIX is optimised to give good response
>time to a large number of interactive terminals. The Amiga's scheduler

No, unix is optimized to give good throughput on CPU hogs and reasonable
interactive response time for lightly loaded systems. If you've modified
your scheduler/swapper/pager, then this may differ.

>> #1. Priority.
>> Unix goes out of its way to give everyone some time unless the system is totally
>> hogged.
>
>which is what you want in a timesharing environment, and isn't too bad in a
>low-grade real-time one.

Not if you want interactive tasks to get more priority (which I do).

>> Amiga, in contrast, has fixed priority levels--if you are at a higher
>> priority, you get all the time you want. Basically, it's like saying "Nice jobs
>> finish last, er, niced jobs get no time unless no one else wants it", as
>> opposed to "Niced jobs get 80% of what they would have".
>
>I'd rather have niced jobs get 80% of expected time. That's what nice is for...
>to allow you to be nice to other users without keeping your job from
>running at all. Nice is for CPU hogs who want even less of the system than
>they would normally get.

Niced jobs get 80% of what they would get if they were not niced. I'd like
to see that figure lower, say 25-30%.

>> #2. Demand paging.
>> This is good only for one thing: Letting a large program run on a small machine.
>> If an interactive program has to wait while a background task runs and pages
>> in, that program is very slow.
>
>Because of the transparent asynchronous I/O shared by the Amiga and UNIX,
>paging in another program does not make you wait. The only time you have
>to wait for paging is when you're the one being paged. Isn't that better
>than not being able to run at all?

What I meant was, the interactive task stops for I/O, gets swapped or paged
out, and then has a slow swap in.

And on my system, when the swapper runs, NO ONE ELSE gets any I/O to that
device. Since my /dev/swap is permanently on the end of /dev/root, that
means any access to /bin or /tmp gets blocked.

>> The Amiga does not suffer from this mixed blessing.
>
>If you never run more programs than you have memory, the overhead for a paged
>system is miniscule. If you need to run more programs than you have memory,
>isn't it better to have the option?
>
>If they don't get in your way, having more options is never a defect.

The hardware support for paging is a 10% overhead even when you don't
need to page. This is the overhead for the TLB (Tomato, lettuce, bacon)
(Translation Lookaside Buffer), which is a cache for storing the address
translation table.

>My personal experience is that UNIX pages quickly and unobtrusively in a
>lightly loaded environment. You obviously have used a heavily loaded system.

Yes, I've used UCLA and a home computer (68000, 512K originally, now 1meg.)

>> The Amiga does not swap, so it can give good response time.
>
>UNIX does not swap (or page) under the same circumstances. Try playing with
>an AT running Microport System V with 5 MEG of RAM some time.

Gee, if only I could get 5meg on my system. But it was designed in 1982.

>> programs get CPU time, even when niced. I've had programs run at nice
>> of +20 on non-paging systems (they run just fine when other people are
>> doing disk io, and they still get good time on their own IO), even when
>> I put them there because I wanted them to WAIT.
>
>That's not what nice is for. The best way of getting a UNIX program to wait
>when it's not expecting to have to is to convince it to read from your
>terminal. Or switch to BSD UNIX and send a SIGSTOP.

Does anyone make BSD for a TRS-80 16? I didn't think so. But I can't modify
programs I didn't write, and I will not modify a program that normally
doesn't need I/O to need I/O.

>> What we need is a "nice"'ing scheme sort of in the middle, where a little
>> niceness does make a noticeable difference in performance, and also
>> in I/O speed. But at the same time, if you get a signal/message/whatever,
>> you should be guaranteed CPU time quickly to respond to it.
>
>This is basically what UNIX gives you. It's just that UNIX does such a
>good job of resource *sharing* that even a major drop in priority doesn't
>stop you dead in your tracks.

Let me try again:

If you have 4 CPU hogs, a niced (+5) job will never run.
If you have 3 CPU hogs, a niced (+20) job will never run.
If one of those 3 decides to do I/O, the niced +20 job will swap in and
swap it out.
Suddenly, your interactive speed drops to the speed of thrashing.

Incidentally, if a job is niced +20, and 3 CPU hogs exist, that niced job
will never see a signal. Not even a kill -9.

				Michael
Unix quotes are based on a V7 swapping system with 512K memory. 1meg on
sys3 helps, but not much (I still swap even when only 400-600K of my
870K user memory gets used.)
-- 
: Michael Gersten		seismo!scgvaxd!stb!michael
: Copy protection? Just say Pirate! (if its worth pirating)

peter@sugar.UUCP (Peter da Silva) (09/03/87)

In article <93@stb.UUCP>, michael@stb.UUCP (Michael) writes:
> In article <582@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
> >It's not better. It's different. UNIX is optimised to give good response
> >time to a large number of interactive terminals. The Amiga's scheduler
> 
> No, unix is optimized to give good throughput on CPU hogs and reasonable
> interactive response time for lightly loaded systems. If you've modified
> your scheduler/swapper/pager, then this may differ.

That's an interesting statement, given that the whole design of the UNIX
scheduler is oriented towards giving good throughput on a large number of
interactive terminals. It also gives adequate response time.

> >which is what you want in a timesharing environment, and isn't too bad in a
> >low-grade real-time one.
> 
> Not if you want interactive tasks to get more priority (which I do)

Interactive tasks do get more priority. If a process does not use all of its
time slice, its priority is raised. Interactive tasks don't tend to use
all of their time slice.
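
The mechanism, roughly (an illustrative sketch in the spirit of the
traditional scheduler; the constants aren't lifted from any particular
kernel): every clock tick charges the running process, the charge decays
over time, and priority is recomputed from the accumulated charge plus the
nice value. A process that sleeps a lot carries almost no charge, so it
sorts ahead of the hogs.

    /* Illustrative only -- names in the style of the classic scheduler. */
    #define PUSER 50            /* base user priority                    */

    struct proc {
        int p_cpu;              /* recent CPU usage, charged per tick    */
        int p_nice;             /* nice value, -20 .. +20                */
        int p_pri;              /* scheduling priority: lower runs first */
    };

    /* Called on every clock tick while the process is running. */
    void charge_tick(struct proc *p)
    {
        if (p->p_cpu < 255)
            p->p_cpu++;
    }

    /* Called periodically (say once a second): usage decays, then the
     * priority is rebuilt from usage and niceness.  A mostly-sleeping
     * process keeps p_cpu near zero and wins over a CPU hog. */
    void recompute_priority(struct proc *p)
    {
        p->p_cpu /= 2;                              /* decay recent usage */
        p->p_pri  = PUSER + p->p_cpu / 4 + 2 * p->p_nice;
    }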

> >I'd rather have niced jobs get 80% of expected time. That's what nice is for...
> >to allow you to be nice to other users without keeping your job from
> >running at all. Nice is for CPU hogs who want even less of the system than
> >they would normally get.
> 
> Niced jobs get 80% of what they would get if they were not niced. I'd like
> to see that figure lower, say 25-30%.

Why? I really don't understand your point here. If they can still run 80% of
the time they would otherwise (presumably because your task is waiting on
I/O), why shouldn't they?

> >Because of the transparent asynchronous I/O shared by the Amiga and UNIX,
> >paging in another program does not make you wait. The only time you have
> >to wait for paging is when you're the one being paged. Isn't that better
> >than not being able to run at all?
> 
> What I meant was, the interactive task stops for I/O, gets swapped or paged
> out, and then has a slow swap in.

OK. That's a different matter. Still... isn't that better than not being able
to run at all? If not, why did you start that CPU-hog in the first place?

> And on my system, when the swapper runs, NO ONE ELSE gets any I/O to that
> device. Since my /dev/swap is permanently on the end of /dev/root, that
> means any access to /bin or /tmp gets blocked.

Your system is a TRS-80 model 16. I evaluated that machine and decided that
it was basically inadequate for real work because of poor hardware design
and bad choices in the UNIX port. I don't recall if this was part of the
poor choices, but I wouldn't be surprised.

> >If they don't get in your way, having more options is never a defect.
> 
> The Hardware support for paging is a 10% overhead even when you don't
> need to page. This is the overhead for the TLB (Tomato, lettuce, bacon)
> (Translation lookaside buffer) which is a cache for storing the address
> translation table.

Always a flat 10%, right? If the memory is fast enough the CPU will see no
overhead. After all, the Amiga has up to a 100% overhead for the custom
chips... you just don't see it because it's going on when the CPU couldn't
use the bus anyway.

> >My personal experience is that UNIX pages quickly and unobtrusively in a
> >lightly loaded environment. You obviously have used a heavily loaded system.
> 
> Yes, I've used UCLA and a home computer (68000, 512K originally, now 1meg.)

A 68000 with a meg should be a pretty lightly loaded home computer. We used
an LSI-11/23 with a meg. The LSI-11/23 is by no means as fast a processor as
the 68000. Yet, unless someone was doing a virtual link the response time was
very good.

> >> The Amiga does not swap, so it can give good response time.
> >
> >UNIX does not swap (or page) under the same circumstances. Try playing with
> >an AT running Microport System V with 5 MEG of RAM some time.
> 
> Gee, if only I could get 5meg on my system. But it was designed in 1982.

OK... try running your system with no more programs loaded than will fit into
memory. This should effectively mimic an Amiga with a Meg. Then decide whether
you would rather have the option of running another program at a lower speed
or not.

> >when it's not expecting to have to is to convince it to read from your
> >terminal. Or switch to BSD UNIX and send a SIGSTOP.
> 
> Does anyone make BSD for a TRS-80 16? I didn't think so. But I can't modify
> programs I didn't write, and I will not modify a program that normally
> doesn't need I/O to need I/O.

Put a little routine in to catch signal 16 and wait until it's gotten another
signal. Then, when you want it to wait, do a "kill -16 <pid>". Do I have to
do all your thinking for you? It shouldn't be too difficult to patch in the
extra instructions to do this. Then do a "#define SIGSTOP SIGUSR1".
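
Something along these lines, assuming signal 16 is SIGUSR1 on that system
(the handler names and the choice of SIGUSR2 as the "continue" signal are
mine): the catcher just sits in pause() until some other signal arrives.

    /* Sketch of the suggestion above, not production code.
     * "kill -16 <pid>" freezes the program, "kill -17 <pid>" resumes it
     * (assuming SIGUSR1 == 16 and SIGUSR2 == 17, as on System V). */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void fake_cont(int sig)
    {
        signal(SIGUSR2, fake_cont);   /* re-arm (old-style signal semantics) */
        (void)sig;                    /* nothing else to do; pause() returns */
    }

    static void fake_stop(int sig)
    {
        (void)sig;
        signal(SIGUSR1, fake_stop);   /* re-arm before we go to sleep        */
        pause();                      /* wait here until the next signal     */
    }

    int main(void)
    {
        signal(SIGUSR1, fake_stop);
        signal(SIGUSR2, fake_cont);

        for (;;) {                    /* stand-in for the real program       */
            printf("working...\n");
            sleep(1);
        }
    }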

> Let me try again:
> 
> If you have 4 CPU hogs, a niced (+5) job will never run.
> If you have 3 CPU hogs, a niced (+20) job will never run.
> If one of those 3 decides to do I/O, the niced +20 job will swap in and
> swap it out.
> Suddenly, your interactive speed drops to the speed of thrashing.

Typically, your CPU hog is the one you want to nice.

So, why are you running more than 1 CPU hog on a trash-80? Your problem is
that you're way overdriving your machine. Version 7 is normally a very
nice operating system. Much more comfortable in small quarters than any
other I've ever used. Once again... UNIX has fooled you into thinking it
will always be able to take on more work. Unfortunately... in the real
world... that ain't the case.

> Incidentally, if a job is niced +20, and 3 CPU hogs exist, that niced job
> will never see a signal. Not even a kill -9.

At least until the CPU hogs do I/O. Why aren't you nicing the hogs?

> Unix quotes are based on a V7 swapping system with 512K memory. 1meg on
> sys3 helps, but not much (I still swap even when only 400-600K of my
> 870K user memory gets used.)

You don't have system 3. Xenix is based entirely on version 7. Microsoft
added a bunch of stuff to make it look like S3. Also, the TRS-80 model 16
has a totally inadequate MMU, which is possibly why you're seeing it swap
so easily.  They probably put in a lot of firewall space to allow for the
lack of hardware memory protection (!).

I'm sorry you got stuck with the machine you did. What are you using it for?
Maybe I can make some suggestions to improve its performance.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                  U   <--- not a copyrighted cartoon :->

czei@osupyr.UUCP (Michael S Czeiszperger) (09/04/87)

In article <582@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>UNIX is not a real-time
>system, but you can change the scheduler to make it one. AT&T uses real-time

Do you, or anyone else, know where to get the reference materials needed
to accomplish this?  Or, for that matter, information about exactly how
the kernel works.  Book suggestions are welcome.


Michael S. Czeiszperger           | Disclaimer: "Sorry, I'm all out of pith" 
Sound Synthesis Studios           | Snail: Room 406 Baker     Phone: (614)
College of the Arts Computer Lab  |        1971 Neil Avenue            292-
The Ohio State University         |        Columbus, OH 43210           0895
UUCP : {decvax,ucbvax}!cbosgd!osupyr!czei