[comp.arch] Macintosh OS

ck@voa3.UUCP (Chris Kern) (05/25/90)

For those of us who don't have any experience with the Mac OS, could
someone explain what its deficiencies are with respect to multitasking?
Is it that the OS doesn't do time-slicing?  Does a context switch have
to wait for some event at the application level?  How is multitasking
implemented?
-- 
Chris Kern			     Voice of America, Washington, D.C.
...uunet!voa3!ck					+1 202-619-2020

jdarcy@zelig.encore.com (Mostly Harmless) (05/26/90)

ck@voa3.UUCP (Chris Kern):
> For those of us who don't have any experience with the Mac OS, could
> someone explain what its deficiencies are with respect to multitasking?
> Is it that the OS doesn't do time-slicing?  Does a context switch have
> to wait for some event at the application level?  How is multitasking
> implemented?

Apple's term for it is "cooperative multitasking".  Most Mac applications
spend the bulk of their time in an event loop, in which they repeatedly
call GetNextEvent() until something happens and then they branch all over
the place according to the event type.  What MultiFinder basically does
is steal control from the running application on the third unsuccessful
return from GetNextEvent.  Thus you are correct: context switching is in
fact done only by the (implied) consent of the current program.  A program
that gets stuck in a tight infinite loop and that never calls GetNextEvent
will hang the system.

Most people are probably aware that Desk Accessories aren't multitasking
either.  Basically they're implemented as drivers and they get events by
an unusual method.  When the foreground application gets an event it has
to check which window the event occurred in; if that window is not one
of its own it explicitly passes the event to a Toolbox routine, which in
turn passes it on to the appropriate desk accessory.  It is trivial for
an application to "steal" desk accessory events by simply not bothering
to pass them on, though this is considered "antisocial".

In fact, there is a mechanism by which a driver (and hence a DA) can
arrange to be run every N ticks (a tick is 1/60 of a second), but this also
depends on
the application calling the Toolbox periodically, at which point the
countdown timer is checked and drivers are awakened if >N ticks have
elapsed.
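
In outline, the canonical application loop looks something like the sketch
below (Toolbox-style C; the Do* routines are hypothetical stand-ins for
whatever the application does with each event).  GetNextEvent is the point
where MultiFinder can take control away, SystemClick is the explicit hand-off
of a click in a system window to the desk accessory, and SystemTask is the
call that gives drivers their periodic time:

#include <Events.h>
#include <Windows.h>
#include <Menus.h>
#include <Desk.h>

extern void DoMenu(long menuResult);                /* hypothetical */
extern void DoMyWindowClick(EventRecord *ev, WindowPtr w);
extern void DoMyKey(EventRecord *ev);
extern void DoMyUpdate(WindowPtr w);

void MainEventLoop(void)
{
    EventRecord ev;
    WindowPtr   win;

    for (;;) {
        SystemTask();                       /* periodic time for drivers and DAs */

        if (!GetNextEvent(everyEvent, &ev))
            continue;                       /* null event; under MultiFinder this is
                                               also where we may get switched out */

        switch (ev.what) {
        case mouseDown:
            switch (FindWindow(ev.where, &win)) {
            case inSysWindow:
                SystemClick(&ev, win);      /* pass the click on to the desk accessory */
                break;
            case inMenuBar:
                DoMenu(MenuSelect(ev.where));
                break;
            case inContent:
                DoMyWindowClick(&ev, win);
                break;
            }
            break;
        case keyDown:
            DoMyKey(&ev);
            break;
        case updateEvt:
            DoMyUpdate((WindowPtr)ev.message);
            break;
        }
    }
}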

There are two other ways in which the Mac OS fails to meet what most
consider minimum requirements for true multitasking.  One is memory
management.  The Mac OS basically doesn't do address translation and
everything runs in the MC68K's Supervisor mode.  This means that an
errant application not only *can* but usually *will* overwrite memory
that doesn't belong to it.  Of course, the toolbox itself is in ROM,
but all system globals are vulnerable, as are other applications.

Lastly, there's the lack of IPC.  This is being addressed by Apple's
Inter-Application Communication facilities (about which I know very
little since they were barely around when I stopped doing Mac stuff),
but I don't see anything comparable to pipes, sockets, mailboxes,
message queues or even signals.  Any communication between entities
under the Mac OS is basically done on a "roll-your-own" basis.

I hope this helps people understand the claims about the Mac OS's status
as a platform for multitasking.  If it seems that I've gone into undue
detail I apologize.



Jeff d'Arcy, Generic Software Engineer - jdarcy@encore.com
      Nothing was ever achieved by accepting reality

john@newave.UUCP (John A. Weeks III) (05/26/90)

In article <37@voa3.UUCP> ck@voa3.UUCP (Chris Kern) writes:
>For those of us who don't have any experience with the Mac OS, could
>someone explain what its deficiencies are with respect to multitasking?

Can the Macintosh System be called an "Operating System"?  Ignoring
system 7.0, the Mac is a collection of procedures, some of which are
in ROM, that everyone agrees to call in the right order.  If anyone
screws up, you get a bomb.  There is no real multi-tasking, no scheduler,
no device or file locking, memory protection, processes, forking, etc.

What many people refer to as the O/S is really the finder program.  And
yes, it is an application just like any other, and you are not required
to use it.  In fact, Apple has a thing called "mini-finder" that allows
you to start Mac programs without running finder.

I am not saying that this is good or bad, but does this qualify as an O/S?
How about MS-DOS?

-john-

-- 
===============================================================================
John A. Weeks III               (612) 942-6969               john@newave.mn.org
NeWave Communications                ...uunet!rosevax!bungia!wd0gol!newave!john
===============================================================================

ac08@vaxb.acs.unt.edu (05/26/90)

In article <402@newave.UUCP>, john@newave.UUCP (John A. Weeks III) writes:
> In article <37@voa3.UUCP> ck@voa3.UUCP (Chris Kern) writes:
>>For those of us who don't have any experience with the Mac OS, could
>>someone explain what its deficiencies are with respect to multitasking?
> 
> Can the Macintosh System be called an "Operating System"?  Ignoring
> system 7.0, the Mac is a collection of procedures, some of which are
> in ROM, that everyone agrees to call in the right order.  If anyone
> screws up, you get a bomb.  There is no real multi-tasking, no scheduler,
> no device or file locking, memory protection, processes, forking, etc.
> 
> What many people refer to as the O/S is really the finder program.  And
> yes, it is an application just like any other, and you are not required
> to use it.  In fact, Apple has a thing called "mini-finder" that allows
> you to start Mac programs without running finder.
> 
> I am not saying that this is good or bad, but does this qualify as an O/S?
> How about MS-DOS?
> 
> -john-
> 

Gee, that's cute.

But according to your definitions, there is no such thing as an "operating
system" on any computer ever made... after all, they're just a bunch of
programs that talk to each other, right?

And UNIX even more so... since most UNIX users I know tend to treat commands
as programs... :)

So- for your next trick, are you going to prove black is white, or that 
1 + 1 = 3?

[Written with more than a little tongue in cheek... the Mac OS is more of
an operating "system" than most, since it *does* have a fairly firm set of 
rules, as opposed to most machines, which have few outside of the coding of
the ROMS and the CPU...]


C Irby

gillies@m.cs.uiuc.edu (05/27/90)

> Can the Macintosh System be called an "Operating System"?  Ignoring
> system 7.0, the Mac is a collection of procedures, some of which are
> in ROM, that everyone agrees to call in the right order.  If anyone
> screws up, you get a bomb.  There is no real multi-tasking, no scheduler,
> no device or file locking, memory protection, processes, forking, etc.
> 
> What many people refer to as the O/S is really the finder program.  

The finder is a shell, not the operating system.  There is a very
beautiful paper by Butler Lampson, I believe, called "An Open
Operating System for a Single-User Machine" (circa 82-84).  Basically,
Lampson observes that the advent of the personal computer allows us to
return to the golden days of the 1950's, with a single programmer, a
library, and a dedicated machine.  Apple's OS is a very close
approximation of Lampson's ideal environment.

Some of the things that are important in a single-user operating
system are:
   1.  fast reboot ( < 10 secs)
   2.  no protected kernel, to allow easy modification of software
   3.  single address space, to maximize modifiability of software
   4.  automatic scavenger, to repair file system in case of harmful crash
 	(limited scavenging available on macintosh)
   5.  file system (mac file system has write-protect locking, by the way)
   6.  drivers (mac has half a dozen device drivers)
   7.  overlay loader (because early macs (& Alto) had no VM)
   8.  well-developed screen management package (window system, 
	vertical retrace synchronization), including picture language.
   9.  Ubiquitous protocol for serializing data on disk (in resources)
	(xerox CPUs use the courier data encoding format)

By these standards the Macintosh system qualifies handsomely as an
operating system.  In fact, until NeWS was written, Sun didn't even
have [8], and I wonder if it has [9]?  Maybe SunView does not qualify
as a single-user operating system. 8-)


Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801      
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies

JONESD@kcgl1.eng.ohio-state.edu (David Jones) (05/28/90)

In article <3300131@m.cs.uiuc.edu>, gillies@m.cs.uiuc.edu writes:
> There is a very
> beautiful paper by Butler Lampson, I believe, called "An open
> Operating System for a Single-User Machine" (circa 82-84).  Basically,
> Lampson observes that the advent of the personal computer allows us to
> return to the golden days of the 1950's, with a single programmer, a
> library, and a dedicated machine.  Apple's OS is a very close
> approximation of lampson's ideal environment.

If Apple's OS is very close to Lampson's ideal, then Lampson's model is flawed.
The primary complaint against the Macintosh OS is the lack of true multi-tasking;
Lampson's ideal environment is only suitable for single-tasking use.

Multi-tasking requires that the use of system resources, such as memory and
I/O devices, be coordinated.  The cooperative approach doesn't work in
practice; a protected kernel with time-slicing and tightly defined device
interfaces does.

David L. Jones               |      Phone:    (614) 292-6929
Ohio State University        |      Internet:
1971 Neil Ave. Rm. 406       |               jonesd@kcgl1.eng.ohio-state.edu
Columbus, OH 43210           |               jones-d@eng.ohio-state.edu

Disclaimer: A repudiation of a claim.

seanf@sco.COM (Sean Fagan) (05/28/90)

(Note the followup...)
In article <26200.265dd7be@vaxb.acs.unt.edu> ac08@vaxb.acs.unt.edu writes:
>Gee, that's cute.
>But according to your definitions, there is no such thing as an "operating
>system" on any computer ever made... after all, they're just a bunch of
>programs that talk to each other, right?
>And UNIX even more so... since most UNIX users I know tend to treat commands
>as programs... :)

No.  They are a bunch of programs that talk to the kernel far more often
than they talk to each other (generally).

Also, please read the article again.  He said "procedures," not programs.
Yes, Virginia, there *is* a difference.

Most people consider that a *true* OS has protection of some sort; that is,
some way of making sure that programs don't step on each other and can live
in peace and harmony. (Sometimes, this is done just to make sure that the
program doesn't step on the OS, true, but it's still nice to be there 8-).)

Ask yourself this: using your "OS," following all of the rules, is it
possible to write a program that will lock up the machine?  On the Mac, I
think it is; under MS-DOS it certainly is.  On the Amiga, I don't think it
is, because the rules they laid down were oriented towards multitasking
instead of rapid screen update.  Yet the Amiga doesn't have an MMU, just
like the Mac.  Which one has the OS, then?

>So- for your next trick, are you going to prove black is white, or that 
>1 + 1 = 3?

No, the next trick would be to make sure that people take an OS course
before defining what an OS is.  That's going to be very, very hard, though,
I think.

>[Written with more than a little tongue in cheek... the Mac OS is more of
>an operating "system" than most, since it *does* have a fairly firm set of 
>rules, as opposed to most machines, which have few outside of the coding of
>the ROMS and the CPU...]

Huh?  The Mac, last time I checked, had a lot of routines a programmer could
use, some of which were in ROM, others in RAM, but not much else.  I have
never, for example, seen something that said "do not modify the Status
Register" (N.B.:  It's been a while, I'll admit, and I may have missed it.
If so, I'd be glad to hear about it).

-- 
-----------------+
Sean Eric Fagan  | "It's a pity the universe doesn't use [a] segmented 
seanf@sco.COM    |  architecture with a protected mode."
uunet!sco!seanf  |         -- Rich Cook, _Wizard's Bane_
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

toddpw@tybalt.caltech.edu (Todd P. Whitesel) (05/28/90)

JONESD@kcgl1.eng.ohio-state.edu (David Jones) writes:

>If Apple's OS is very close to Lampson's ideal, then Lampson's model is flawed.
>The primary complaint against the Macintosh OS is the lack of true multi-tasking;
>Lampson's ideal environment is only suitable for single-tasking use.

I think you're confusing single/multi-tasking and single/multi-USER.

>Multi-tasking requires that the use of system resources, such as memory and
>I/O devices, be coordinated.  The cooperative approach doesn't work in
>practice; a protected kernel with time-slicing and tightly defined device
>interfaces does.

Cooperative multitasking is excellent for single user environments because the
"foreground" process can always offer the best response time. Scheduling is not
much of an issue, but there are extra burdens on the programmer and that is the
real problem that Apple ought to deal with.

In a multi-user environment it is absolutely essential that each process be
isolated, preferably by the hardware. Pre-emptive multitasking is also required
to ensure that everyone gets CPU time, but that makes response time very hard
to guarantee.

We're looking at two ends of a stick, people. Both Apple's O/S and Unix are
capable of multi-tasking in a practical sense (meaning that you can have more
than one program in the machine and running simultaneously from the user's
point of view -- heck, an Apple IIgs can do _that_). But the Mac O/S is a lot
more responsive to one user and a Unix box is reasonably responsive to a group
of users.

It's darned hard to get both response time and reliable multi-user operation in
the same box. That's where things are headed, but don't criticize a stepping
stone because it isn't the whole bridge.

Todd Whitesel
toddpw @ tybalt.caltech.edu

henry@utzoo.uucp (Henry Spencer) (05/28/90)

In article <3300131@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu writes:
>beautiful paper by Butler Lampson, I believe, called "An open
>Operating System for a Single-User Machine" (circa 82-84).  Basically,
>Lampson observes that the advent of the personal computer allows us to
>return to the golden days of the 1950's...

One might note that said paper is nearly a decade old, and the more recent
systems produced by Lampson and his colleagues are working hard on getting
back out of the 50s.  That approach had problems as well as virtues.
-- 
As a user I'll take speed over|     Henry Spencer at U of Toronto Zoology
features any day. -A.Tanenbaum| uunet!attcan!utzoo!henry henry@zoo.toronto.edu

cory@three.MV.COM (Cory Kempf) (05/30/90)

ck@voa3.UUCP (Chris Kern) writes:

>For those of us who don't have any experience with the Mac OS, could
>someone explain what its deficiencies are with respect to multitasking?
>Is it that the OS doesn't do time-slicing?  Does a context switch have
>to wait for some event at the application level?  How is multitasking
>implemented?

A few weeks ago, this would have been real easy... All I would have had
to talk about was System 6.0.  Now (since I have gotten my mitts on the
alpha version of System 7) it is a tad more difficult.

Background:

In the beginning, there was the Mac, a nice little single-tasking
desktop machine.  Nice, but soon, people wanted more: Hence Switcher.
Switcher was a system that would allow several applications to be 
around at the same time.  Only one could execute at a time.  Context
switches were caused by the user (with co-operation from the program).
Later, IBM announced that it was doing a brand new multitasking OS 
for its line of PCs.  Apple, not to be outdone, announced that it 
would also.  IBM/Microsoft decided that an abrupt change would be 
best.  Apple decided that keeping their existing application base
was more important, and decided to migrate to multitasking in three
steps, each a year apart.  They hoped to break less than 5% of the 
applications at each step.


The world of System 6:

This is the first step.  Cooperative multitasking was implemented,
as were other things.  A context switch occurred when the application
was waiting for the user to do something (the application was
responsible for making timely calls to the OS to support this).  If
it didn't, background tasks were starved.  While applications had to
be specially written to support the multitasking, a properly written
multitasking-aware application would run in the foreground and
the background seamlessly.  Applications that did not explicitly
support multitasking (in general) still allowed it to occur.
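
A well-behaved System 6 application ends up with a main loop shaped roughly
like the sketch below (HandleEvent and DoIdleWork are hypothetical).  The
sleep argument to WaitNextEvent is how the program volunteers time to
background layers, and doing its own background work in small pieces on null
events is what lets the same code run in front or behind:

#include <Events.h>

extern void HandleEvent(EventRecord *ev);   /* hypothetical dispatcher */
extern void DoIdleWork(void);               /* hypothetical small chunk of work */
extern Boolean gDone;

void CooperativeMainLoop(void)
{
    EventRecord ev;

    while (!gDone) {
        /* sleep = 15 ticks: willing to be switched out for up to 1/4 second */
        if (WaitNextEvent(everyEvent, &ev, 15L, nil))
            HandleEvent(&ev);
        else
            DoIdleWork();                   /* null event: do a slice of our own work */
    }
}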

MultiFinder (the multitasking version of the Mac's OS) broke on the
order of 10% of the applications that worked under the single Finder.

System 7:

Well, Apple slipped a bit... What's a couple of years between friends?
At this step, they have added virtual memory, two inter-process
communication systems, and some other goodies.  One of the IPC systems
is designed to allow, say, a drawing program to create a picture and,
say, a word processor to paste it into a document.  If the user should
change the picture, the WP document would be automatically updated.  Apple
has sent out an early release of the OS to developers in hopes of not
breaking too many existing applications.

System 8:

Like, vapourware to the Macs!  :-)

Supposed to complete the journey.  Was due out (according to the original
schedule) this year.  From the rumor mill, it is supposed to implement
preemptive time slicing, etc.

+C

-- 
Cory Kempf				I do speak for the company (sometimes).
Three Letter Company						603 883 2474
email: cory@three.mv.com, harvard!zinn!three!cory

sjc@key.COM (Steve Correll) (05/30/90)

In article <402@newave.UUCP>, john@newave.UUCP (John A. Weeks III) writes:
> Can the Macintosh System be called an "Operating System"?  Ignoring
> system 7.0, the Mac is a collection of procedures, some of which are
> in ROM, that everyone agrees to call in the right order.  If anyone
> screws up, you get a bomb.  There is no real multi-tasking, no scheduler,
> no device or file locking, memory protection, processes, forking, etc.

Suppose you implemented the C library standalone on a bare machine, adding the
capability to execute multiple C programs by switching from one to another
during calls to the library. Would that be an operating system? People
accustomed to conventional timesharing systems might answer "no", but people
accustomed to simple ROM-able operating systems might answer "yes".
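
As a toy illustration of that idea (not Mac code), here is a sketch that uses
POSIX ucontext to stand in for a bare-machine context switch.  The names
lib_putchar, prog_a and prog_b are invented; the point is only that switching
happens inside the "library" call and nowhere else, which is exactly the
cooperative property being discussed:

#include <stdio.h>
#include <ucontext.h>

#define NPROG   2
#define STACKSZ 65536

static ucontext_t main_ctx, prog_ctx[NPROG];
static char       stacks[NPROG][STACKSZ];
static int        current;

/* Every "library" entry point yields to the next program before returning. */
static void lib_putchar(char c)
{
    int prev = current;

    putchar(c);
    current = (current + 1) % NPROG;
    swapcontext(&prog_ctx[prev], &prog_ctx[current]);
}

static void prog_a(void) { int i; for (i = 0; i < 5; i++) lib_putchar('A'); }
static void prog_b(void) { int i; for (i = 0; i < 5; i++) lib_putchar('B'); }

int main(void)
{
    void (*entry[NPROG])(void) = { prog_a, prog_b };
    int  i;

    for (i = 0; i < NPROG; i++) {
        getcontext(&prog_ctx[i]);
        prog_ctx[i].uc_stack.ss_sp   = stacks[i];
        prog_ctx[i].uc_stack.ss_size = STACKSZ;
        prog_ctx[i].uc_link          = &main_ctx;   /* come back here when a program ends */
        makecontext(&prog_ctx[i], entry[i], 0);
    }
    swapcontext(&main_ctx, &prog_ctx[0]);           /* prints ABABABABAB */
    putchar('\n');
    return 0;
}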

The Mac OS is a deluxe, highly graphical version of that approach. Myself, I
think Apple's advertising slogan ought to be "The User Interface _is_ the
Computer".
-- 
...{sun,pyramid}!pacbell!key!sjc 				Steve Correll

philip@Kermit.Stanford.EDU (Philip Machanick) (05/31/90)

In article <1935@key.COM>, sjc@key.COM (Steve Correll) writes:
> In article <402@newave.UUCP>, john@newave.UUCP (John A. Weeks III) writes:
> > Can the Macintosh System be called an "Operating System"?  Ignoring
> > system 7.0, the Mac is a collection of procedures, some of which are
> > in ROM, that everyone agrees to call in the right order.  If anyone
> > screws up, you get a bomb.  There is no real multi-tasking, no scheduler,
> > no device or file locking, memory protection, processes, forking, etc.
> 
> Suppose you implemented the C library standalone on a bare machine, adding the
> capability to execute multiple C programs by switching from one to another
> during calls to the library. Would that be an operating system? People
> accustomed to conventional timesharing systems might answer "no", but people
> accustomed to simple ROM-able operating systems might answer "yes".

This "is the Mac OS an OS" line seems to assume that an OS _only_ defines
multitasking. I always thought it defined a whole bunch of abstractions, like
the file system. The Mac does most of this conventional stuff, plus a sort
of abstract machine model for graphics. Maybe the implementation is not great
because you can break the abstractions too easily, but that's a different
issue.

Philip Machanick
philip@pescadero.stanford.edu

edwardj@microsoft.UUCP (Edward JUNG) (06/01/90)

In article <1990May28.083518.26003@laguna.ccsf.caltech.edu> toddpw@tybalt.caltech.edu (Todd P. Whitesel) writes:
>Cooperative multitasking is excellent for single user environments because the
>"foreground" process can always offer the best response time. Scheduling is not
>much of an issue, but there are extra burdens on the programmer and that is the
>real problem that Apple ought to deal with.

Cooperative multitasking does not guarantee best response time for the foreground
process.  Actually, it offers the possibility of exceptional and uncontrolled
degradation of the foreground process.  If a foreground process IS a foreground
process (that implies that there are one or more background processes), then
any time it gives a timeslice away, it may not get control back for a long time.

Indeed, a pre-emptive multitasking system that was optimized toward foreground
tasks can give BETTER response to the user than cooperative multitasking because
the scheduler could give guaranteed responses to the foreground process (note the
wonderfully helpful fact that there is only ONE foreground process, making the
real-time scheduling problem rather simpler).

Now it may be true that very few systems give real-time overrides or guaranteed
response times to foreground tasks (though there are some systems that do!), but
that is an issue largely orthogonal to that of cooperative vs. preemptive
multitasking.

The thing that pre-emptive multitasking does is give guarantees against poor
code.  The Macintosh (and Windows) have gained a lot of mileage with cooperative
multitasking (note that this does not apply to ALL Windows apps) solely because
their apps give up time relatively frequently.  Even so, any single misbehaved
application in the background can utterly destroy the performance of a correctly
written application in the foreground.

Now whether the guarantees of pre-emptive multitasking are correctly architected
or are even required (given the quality of applications) is another area of debate.

>Todd Whitesel
>toddpw @ tybalt.caltech.edu



Edward Jung
Systems Architecture
Microsoft Corp.

firth@sei.cmu.edu (Robert Firth) (06/01/90)

In article <54992@microsoft.UUCP> edwardj@microsoft.UUCP (Edward JUNG) writes:

>Cooperative multitasking does not guarantee best response time for the
>foreground process.  Actually, it offers the possibility of exceptional
>and uncontrolled degradation of the foreground process.

Right on!  I get to share a Mac with the rest of this floor.  It is a
Mac IIcx, with power, size, and capability that would have been
unbelievable 10 years ago.  It has a print spooler.

Now, a print spooler doesn't take that many cycles.  It's a traditional
background task.  So, when the spooler is running, how come the screen
can go dead for 15 or 20 seconds?  (I know; I timed it yesterday)  Why,
because it has to wait for the spooler to kindly decide to relinquish
control of the CPU.

So, here I am, with approximately 100 times the machine resources of
the PDP-11/45 that I used to share with 8 or 10 other timesharing
users, and the response time can be 100 times worse.  That's a
degradation of four orders of magnitude, all due to one key design
decision.

The ability of hardware engineers to enhance is far outmatched by
the ability of software engineers to degrade.  And you may quote me.

jap@convex.msu.edu (Joe Porkka) (06/01/90)

firth@sei.cmu.edu (Robert Firth) writes:

>In article <54992@microsoft.UUCP> edwardj@microsoft.UUCP (Edward JUNG) writes:

>>Cooperative multitasking does not guarantee best response time for the
>>foreground process.  Actually, it offers the possibility of exceptional
>>and uncontrolled degradation of the foreground process.

>Right on!  I get to share a Mac with the rest of this floor.  It is a
>Mac IIcx, with power, size, and capability that would have been
>unbelievable 10 years ago.  It has a print spooler.

er, uh.... Not to start a "MY computer is better than YOUR computer
	religious war", but I think it may be time to upgrade
	your Mac IIcx to an Amiga 3000. Round robin prioritized 
	*preemptive* scheduler.

    It does not by default give the "foreground" process higher priority
    (it does not differentiate between foreground and background things;
    they are all just tasks to it), but a simple PD hack will raise the
    priority of the task controlling the active window, giving the desired effect.
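
    For what it's worth, the hack amounts to something like the sketch below
    (real AmigaOS calls, but error handling and the event-driven triggering a
    real commodity would need are left out, so treat it as an outline rather
    than a recipe):

    #include <exec/types.h>
    #include <exec/tasks.h>
    #include <exec/ports.h>
    #include <intuition/intuition.h>
    #include <intuition/intuitionbase.h>
    #include <proto/exec.h>
    #include <proto/intuition.h>

    extern struct IntuitionBase *IntuitionBase;     /* assumed already opened */

    void BoostActiveWindowTask(void)
    {
        struct Window *win;
        struct Task   *owner = NULL;
        ULONG          ilock;

        ilock = LockIBase(0);               /* freeze Intuition state while we peek */
        win = IntuitionBase->ActiveWindow;
        if (win != NULL && win->UserPort != NULL)
            owner = (struct Task *)win->UserPort->mp_SigTask;
        UnlockIBase(ilock);

        if (owner != NULL)
            SetTaskPri(owner, 1);           /* a notch above the default priority of 0 */
    }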

minich@d.cs.okstate.edu (Robert Minich) (06/02/90)

>>Right on!  I get to share a Mac with the rest of this floor.  It is a
>>Mac IIcx, with power, size, and capability that would have been
>>unbelievable 10 years ago.  It has a print spooler.
> 
> er, uh.... Not to start a "MY computer is better than YOUR computer
> 	religious war", but I think it may be time to upgrade
> 	your Mac IIcx to an Amiga 3000. Round robin prioritized 
> 	*preemptive* scheduler.

  How about something much easier? Why not just rap on the noggins of
those who wrote the spooler in the first place to make it more friendly?
I find it pretty darn easy to find good places to give up cpu time, even 
with code I didn't write. Don't get me wrong; when (someday) the Mac does
preemption, I won't complain. But I hardly think it is the _only_ way to
achieve smooth multitasking. For the spooler situation, I'd rather we
had smarter printers with hard disks (and Virtual memory!) or extended
file servers (a definite option today) than waste some resources on my
personal machine.

-- 
| _    /| | Robert Minich             |
| \'o.O'  | Oklahoma State University |  
| =(___)= | minich@a.cs.okstate.edu   | 
|    U    | - Bill sez "Ackphtth"     |

csimmons@jewel.oracle.com (Charles Simmons) (06/02/90)

In article <1990May30.230248.6200@Neon.Stanford.EDU>,
philip@Kermit.Stanford.EDU (Philip Machanick) writes:
> From: philip@Kermit.Stanford.EDU (Philip Machanick)
> Subject: Re: Macintosh OS
> Date: 30 May 90 23:02:48 GMT
> 
> In article <1935@key.COM>, sjc@key.COM (Steve Correll) writes:
> > In article <402@newave.UUCP>, john@newave.UUCP (John A. Weeks III) writes:
> > > Can the Macintosh System be called an "Operating System"?  Ignoring
> > > system 7.0, the Mac is a collection of procedures, some of which are
> > > in ROM, that everyone agrees to call in the right order.  If anyone
> > > screws up, you get a bomb.  There is no real multi-tasking, no scheduler,
> > > no device or file locking, memory protection, processes, forking, etc.
> > 
> > Suppose you implemented the C library standalone on a bare machine, adding the
> > capability to execute multiple C programs by switching from one to another
> > during calls to the library. Would that be an operating system? People
> > accustomed to conventional timesharing systems might answer "no", but people
> > accustomed to simple ROM-able operating systems might answer "yes".
> 
> This "is the Mac OS an OS" line seems to assume that an OS _only_ defines
> multitasking. I always thought it defined a whole bunch of abstractions, like
> the file system. The Mac does most of this conventional stuff, plus a sort
> of abstract machine model for graphics. Maybe the implementation is not great
> because you can break the abstractions too easily, but that's a different
> issue.
> 
> Philip Machanick
> philip@pescadero.stanford.edu

This "is the Mac OS an OS" line seems to assume that an OS defines a
gruntload of really strange abstractions:  like what your graphical user
interface should look like.  I always thought an OS should define a very
minimal number of abstractions:  like how the cpu resource is allocated
to different processes (scheduling); like how the memory resource is
allocated to different processes; and like how inter-process communications
is performed.  File systems?  Relational databases?  TCP/IP?  Let them
be done by user-level processes.

-- Chuck

dittman@skbat.csc.ti.com (Eric Dittman) (06/03/90)

In article <1990Jun1.141403.19240@msuinfo.cl.msu.edu>, jap@convex.msu.edu (Joe Porkka) writes:
> firth@sei.cmu.edu (Robert Firth) writes:
> 
>>In article <54992@microsoft.UUCP> edwardj@microsoft.UUCP (Edward JUNG) writes:
> 
>>>Cooperative multitasking does not guarantee best response time for the
>>>foreground process.  Actually, it offers the possibility of exceptional
>>>and uncontrolled degradation of the foreground process.
> 
>>Right on!  I get to share a Mac with the rest of this floor.  It is a
>>Mac IIcx, with power, size, and capability that would have been
>>unbelievable 10 years ago.  It has a print spooler.
> 
> er, uh.... Not to start a "MY computer is better than YOUR computer
> 	religious war", but I think it may be time to upgrade
> 	your Mac IIcx to an Amiga 3000. Round robin prioritized 
> 	*preemptive* scheduler.

er, uh.... Not to start a "MY print spooler is better than YOUR print
      spooler", but I think it may be time to change print spoolers.
      Some Mac print spoolers aren't that good.  Changing print spoolers
      is a lot cheaper than changing computers, plus you still get to
      use all your old applications.

Eric Dittman
Texas Instruments - Component Test Facility
dittman@skitzo.csc.ti.com
dittman@skbat.csc.ti.com

Disclaimer:  I don't speak for Texas Instruments or the Component Test
             Facility.  I don't even speak for myself.

jesup@cbmvax.commodore.com (Randell Jesup) (06/03/90)

In article <54992@microsoft.UUCP> edwardj@microsoft.UUCP (Edward JUNG) writes:
>The thing that pre-emptive multitasking does is give guarantees against poor
>code.  The Macintosh (and Windows) have gained a lot of mileage with cooperative
>multitasking (note that this does not apply to ALL Windows apps) solely because
>their apps give up time relatively frequently.  Even so, any single misbehaved
>application in the background can utterly destroy the performance of a correctly
>written application in the foreground.

	Note that in this case, "poor code" can mean anything that runs for
a significant period without explicitly giving up the processor.  This includes
most "standard" C programs which use appreciable processor time, such as
ray tracers.  It also means that background tasks doing IO may not work well
or at all if the frontmost application doesn't give up the cpu VERY often,
since various buffers may overflow, etc.

disclaimer: I'm biased, of course, since I maintain a pre-emptive MT OS on a
personal computer.

-- 
Randell Jesup, Keeper of AmigaDos, Commodore Engineering.
{uunet|rutgers}!cbmvax!jesup, jesup@cbmvax.cbm.commodore.com  BIX: rjesup  
Common phrase heard at Amiga Devcon '89: "It's in there!"

seanf@sco.COM (Sean Fagan) (06/04/90)

(Note, once again, the followup line.)
From: philip@Kermit.Stanford.EDU (Philip Machanick)
> This "is the Mac OS an OS" line seems to assume that an OS _only_ defines
> multitasking. 

Ok, how about this:  a single-tasking "OS" is a true OS if a program written
for it can be moved to a multi-tasking version of said OS without breaking
in any way.  That is, if it worked before, it should work now.

I don't believe the Mac does this properly, and that's why I think of it as
a non-OS.  It would be possible, I imagine, to have a "true" MacOS that
multitasked, trapped each and every trap, decided what should be done
about it, and then continued the process (for example, doing a bunch of
windows on the screen by trapping screen writes, and then only showing a
certain portion of the "screen" in the window).  But I would, I think,
consider that an emulation package, a la DOS-under-unix.

(I'm not sure whether things like dereferencing NULL, which would probably
be a bug [but not necessarily!  The design could depend on it!], causing a
fatal exception of some sort should count when going single-user to
multi-user.)

-- 
-----------------+
Sean Eric Fagan  | "It's a pity the universe doesn't use [a] segmented 
seanf@sco.COM    |  architecture with a protected mode."
uunet!sco!seanf  |         -- Rich Cook, _Wizard's Bane_
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

peter@ficc.ferranti.com (Peter da Silva) (06/04/90)

In article <1990Jun2.132847.14292@oracle.com> csimmons@oracle.com writes:
> This "is the Mac OS an OS" line seems to assume that an OS defines a
> gruntload of really strange abstractions:  like what your graphical user
> interface should look like.  I always thought an OS should define a very
> minimal number of abstractions:  like how the cpu resource is allocated
> to different processes (scheduling)...

Exactly. An operating system is basically a resource manager for programs.
And one of the most important resources available is CPU time. An operating
system that does not manage that resource is so primitive as to barely
qualify for the name.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

ac08@vaxb.acs.unt.edu (06/05/90)

In article <Y5X3=SB@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes:
> In article <1990Jun2.132847.14292@oracle.com> csimmons@oracle.com writes:
>> This "is the Mac OS an OS" line seems to assume that an OS defines a
>> gruntload of really strange abstractions:  like what your graphical user
>> interface should look like.  I always thought an OS should define a very
>> minimal number of abstractions:  like how the cpu resource is allocated
>> to different processes (scheduling)...
> 
> Exactly. An operating system is basically a resource manager for programs.
> And one of the most important resources available is CPU time. An operating
> system that does not manage that resource is so primitive as to barely
> qualify for the name.
> -- 
> Peter da Silva

Oh, yeah, real important.  For most small (single-user) machines, the CPU is
really "working" at about 1% of capacity... and the few times it's up
to that capacity, it's usually doing something to interface with a user...

Sorry- come up with a better argument.... you were doing better with the
"memory management" thing.  People run out of memory a lot more than they
"peg the needle" with the CPU...  And those preemptive multitasking systems
suck RAM like nobody's business...


C Irby
ac08@vaxb.acs.unt.edu
ac08@untvax

jejones@mcrware.UUCP (James Jones) (06/05/90)

In article <26437.266ae612@vaxb.acs.unt.edu> ac08@vaxb.acs.unt.edu writes:
>And those preemptive multitasking systems
>suck RAM like nobody's business...

Gee, that would be news to those who have used OS-9/6809 Level One (designed
to work in a single 64K address space) for years.  (Admittedly, when I composed
a mail reply (which bounced, alas), I was on a Level Two system, but even with
windowing and a 96K RAM disk, 512K is fairly comfortable.)  For that matter, I
wouldn't call OS-9/68K a memory hog...

	James Jones

peter@ficc.ferranti.com (Peter da Silva) (06/06/90)

I said: an O/S is basically a resource manager, and CPU time is one of the
most important resources.

In article <26437.266ae612@vaxb.acs.unt.edu> ac08@vaxb.acs.unt.edu writes:
> Oh, yeah, real important.  For most small (single-user) machines, the CPU is
> really "working" at about 1% of capacity... and the few times it's up
> to that capacity, it's usually doing something to interface with a user...

The fact that you have a lot of CPU time to manage, or a little, is pretty
much irrelevant. The point is that it's got to be allocated. On the Mac this
is done by hand, by each and every application program. Some programs are
specially written to do a better job of this... these are called Desk
Accessories. They have to stay in memory all the time, or you don't have
them available. That's why, on the Mac...

> People run out of memory a lot more than they
> "peg the needle" with the CPU...

... and on the IBM-PC, too. Because to have a program available at short
notice it has to be pretty much loaded at boot time. Or you have to load
a kludgey context switcher that requires you pre-allocate memory. The reason
the Mac chews memory is that it doesn't have...

> And those preemptive multitasking systems
> suck RAM like nobody's business...

Nope. They free it, by making all your tools available at any time, whether
the guy who wrote the application you're using was a good programmer or a
mediocre one. I still have a 512K Amiga 1000, and it's still got more
horsepower left over for *me* than your multimeg Mac-II-whatever.

Remember... time is not conserved. Memory is.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

gil@banyan.UUCP (Gil Pilz@Eng@Banyan) (06/06/90)

In article <26437.266ae612@vaxb.acs.unt.edu> ac08@vaxb.acs.unt.edu writes:
>In article <Y5X3=SB@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes:
>> Exactly. An operating system is basically a resource manager for programs.
>> And one of the most important resources available is CPU time. An operating
>> system that does not manage that resource is so primitive as to barely
>> qualify for the name.

>Oh, yeah, real important.  For most small (single-user) machines, the CPU is
>really "working" at about 1% of capacity... and the few times it's up
>to that capacity, it's usually doing something to interface with a user...

Irrelevant. "How much" it's working isn't nearly as important as what
it's working on.  If some background process has grabbed the CPU and
won't let me into my editor it doesn't _matter_ if this process is
memory-bound, disk-bound, or hide-bound. From my point of view I'm not
getting anything done. Without a scheduler to give me the CPU while
the other process is waiting I really am wasting the CPU.

Gilbert Pilz Jr. "sick, and proud of it" gil@banyan.com

minich@d.cs.okstate.edu (Robert Minich) (06/06/90)

peter@ficc.ferranti.com (Peter da Silva):
| The fact that you have a lot of CPU time to manage, or a little, is pretty
| much irrelevant. The point is that it's got to be allocated. On the Mac this
| is done by hand, by each and every application program. Some programs are
| specially written to do a better job of this... these are called Desk
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| Accessories. they have to stay in memory all the time, or you don't have
  ^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| them available. That's why, on the Mac...
  ^^^^^^^^^^^^^^^
| 
|> People run out of memory a lot more than they
|> "peg the needle" with the CPU...
| 
| ... and on the IBM-PC, too. Because to have a program available at short
| notice it has to be pretty much loaded at boot time. Or you have to load
| a kludgey context switcher that requires you pre-allocate memory. The reason
| the Mac chews memory is that it doesn't have...
| 
|> And those preemptive multitasking systems
|> suck RAM like nobody's business...
| 
| Nope. They free it, by making all your tools available at any time, whether
| the guy who wrote the application you're using was a good programmer or a
| mediocre one. I still have a 512K Amiga 1000, and it's still got more
| horsepower left over for *me* than your multimeg Mac-II-whatever.

Bzzzt. Let's clear up a couple of misconceptions here. First, there is no
requirement that desk accessories be any more graceful about yielding
CPU than any application. Second, DA's are not loaded at boot time as
implied above. They are loaded on demand.
  Something that is even more relevant to this whole discussion is that
not all programs get CPU while they are in the background. There is a
twiddleable bit that determines this. This bit was added with the
release of MultiFinder. Setting it implies that the program was written
to actually do something in the background, acknowledging the
limitations of MultiFinder and taking the responsibility to yield CPU
time frequently enough that the foreground application does not become
sluggish. If this bit isn't set, the program will never get any time in
the background (except when it has to update the contents of a
window). 
  So the whole point is that any background app that hogs the CPU was
written irresponsibly, even though it may come from Apple. (MacWrite and
MacPaint both did naughty things!) 

  Let me say again that preemptive multitasking is not a necessity for
99% of the Mac community. I won't mind when the Mac does do preemption,
though. I have yet to see any reason other than poor programming that
cooperative multitasking may be unacceptable for most people.
-- 
| _    /| | Robert Minich             |
| \'o.O'  | Oklahoma State University |  
| =(___)= | minich@d.cs.okstate.edu   | 
|    U    | - Bill sez "Ackphtth"     |

peter@ficc.ferranti.com (Peter da Silva) (06/06/90)

In article <1990Jun6.055847.14995@d.cs.okstate.edu> minich@d.cs.okstate.edu (Robert Minich) writes:
>   Let me say again that preemptive multitasking is not a necessity for
> 99% of the Mac community. I won't mind when the Mac does do preemption,
> though. I have yet to see any reason other than poor programming that
> cooperative multitasking may be unacceptable for most people.

How about the fact that programmers may have better things to do with their
time than warp code to fit into the windowing universe? I realise that on
the Mac 90% of the programs are 90% user-interface, but that's not always
the best way to do things. A compiler, for example, really has no business
calling GetNextEvent *ever*.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (06/07/90)

In article <1990Jun6.055847.14995@d.cs.okstate.edu> 
	minich@d.cs.okstate.edu (Robert Minich) writes:

>  So the whole point is that any background app that hogs the CPU was
>written irresponsibly, even though it may come from Apple. (MacWrite and
>MacPaint both did naughty things!) 

>  Let me say again that preemptive multitasking is not a necessity for
>99% of the Mac community. I won't mind when the Mac does do preemption,
>though. I have yet to see any reason other than poor programming that
>cooperative multitasking may be unacceptable for most people.

Hmmm. Several major products from Apple both did naughty things?

"As a rule software systems do not work well until they have been
used, and have failed repeatedly, in real applications."
 - Dave Parnas, Commun. ACM (33, 6 June 1990 p.636) 

When I started programming in the 60's, I noticed that many people
acted as if their next program would be perfect, and would run
correctly the first time.  The fact that this had never happened to
them didn't seem to influence their behavior.  Only the best
programmers factored fallibility into the design process.  I know
that CS education is better now: but I refuse to believe in silver
bullets.  Nor does Apple: they do intend to eliminate this blemish.



-- 
Don		D.C.Lindsay 	Carnegie Mellon Computer Science

sysmgr@KING.ENG.UMD.EDU (Doug Mohney) (06/07/90)

In article <9548@pt.cs.cmu.edu>, lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) writes:
>>  So the whole point is that any background app that hogs the CPU was
>>written irresponsibly, even though it may come from Apple. (MacWrite and
>>MacPaint both did naughty things!) 
>
>Hmmm. Several major products from Apple both did naughty things?
[ Other stuff cut...]
>.....  Nor does Apple: they do intend to eliminate this blemish.

MacPaint and MacWrite were written as quick and dirty applications to show
what the Mac is capable of, and were (probably, I don't know 100%) developed
before Apple's "Thou shall write apps to follow these standards." 

MacWrite got turned into MacWrite II, and I don't know what happened to
MacPaint; I think most people started using SuperPaint and lived happily
ever after. 

				Doug

daveo@Apple.COM (David M. O'Rourke) (06/07/90)

peter@ficc.ferranti.com (Peter da Silva) writes:
>the best way to do things. A compiler, for example, really has no business
>calling GetNextEvent *ever*.

  What if the user, a programmer in this case, wants to stop the compile??

  Even though a compiler may be primarily a batch-oriented process, there is
still a user who needs to be serviced.  In addition, more complex compiler
designs of the future might call for more programmer interaction when they
find a syntax error, allowing the programmer to fix the syntax error and
continue.

  I basically agree with your point, but I think you're limiting your view
of what a compiler should be to what compilers currently are; there are reasons
and examples for user interaction in almost any well-thought-out software
tool.

  But that's just my $0.02 worth... :-)
-- 
daveo@apple.com                                               David M. O'Rourke

"Hey where'd you learn to shoot like that?" ... "At the 7-11."
     -- Marty McFly (Back to the future III)
_______________________________________________________________________________
I do not speak for Apple in any official sense.

gft_robert@gsbacd.uchicago.edu (06/07/90)

------- 
In article <:SY35CD@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes...

>How about the fact that programmers may have better things to do with their
>time than warp code to fit into the windowing universe? I realise that on
>the mac 90% of the programs are 90% user-interface, but that's not always
>the best way to do things. A compiler, for example, really has no business
>calling GetNextEvent *ever*.


And if the user wants to interrupt the compilation mid-compile?  You'd better
have some way of finding at least this out.  GetNextEvent (or WaitNextEvent)
seems the proper way to do this to me.
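
Concretely, the kind of thing being suggested is just a periodic poll in the
compiler's outer loop -- a sketch, where CompileOneFunction stands in for a
real unit of work and the Command-period test uses the usual Toolbox event
fields:

#include <Events.h>

extern void CompileOneFunction(long n);     /* hypothetical unit of work */

static Boolean UserWantsAbort(void)
{
    EventRecord ev;

    /* Calling GetNextEvent here is also what lets MultiFinder switch us out. */
    if (GetNextEvent(everyEvent, &ev) &&
        ev.what == keyDown &&
        (ev.modifiers & cmdKey) &&
        (char)(ev.message & charCodeMask) == '.')
        return true;                        /* the user hit Command-period */
    return false;
}

void CompileAllFunctions(long nFunctions)
{
    long i;

    for (i = 0; i < nFunctions; i++) {
        CompileOneFunction(i);
        if (UserWantsAbort())
            break;
    }
}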

You may indeed have to change some of your code to run properly on the Mac.  Or
put another way: you may have to change some of your code to put the user in
complete control.  The above example is a case in point.

Robert





============================================================================
= gft_robert@gsbacd.uchicago.edu * generic disclaimer: * "It's more fun to =
=            		         * all my opinions are *  compute"         =
=                                * mine                *  -Kraftwerk       =
============================================================================

iyengar@grad2.cis.upenn.edu (Anand Iyengar) (06/07/90)

In article <44.2667b118@skbat.csc.ti.com> dittman@skbat.csc.ti.com (Eric Dittman) writes:
>In article <1990Jun1.141403.19240@msuinfo.cl.msu.edu>, jap@convex.msu.edu (Joe Porkka) writes:
>> 	your Mac IIcx to an Amiga 3000. Round robin prioritized 
>      is a lot cheaper than changing computers, plus you still get to
>      use all your old applications.
	Actually, you may still be able to do this after going to an Amiga.
There's supposedly a kit which lets the Amiga run a lot of Mac software.  I
have no experience with it, though (anyone care to comment on how good it
really is, and what breaks?).  IMHO, what I've seen of the Amiga's OS is much
nicer than the Mac's.  As others have pointed out, MultiFinder has some
shortcomings.

							Anand.  
--
"You still have to think.  That shouldn't be required."
iyengar@eniac.seas.upenn.edu
--- Lbh guvax znlor vg'yy ybbx orggre ebg-guvegrrarg? ---
Disclaimer:  It's a forgery.  


minich@d.cs.okstate.edu (Robert Minich) (06/07/90)

by iyengar@grad2.cis.upenn.edu (Anand Iyengar):
> There's supposedly a kit which lets the Amiga run a lot of MacSoftware.  I
> have no experience with it, though (anyone care to comment on how good it
> really is, and what breaks?).  

  Unfortunately, turning an Amiga into a Mac, while possibly saving
money, does not solve the problem since it would still be the exact same
software, albeit on different hardware. (Not to mention you only get the
functionality of a quick Mac Plus or SE!) Now, if you want to look into 
A/UX 2.0, then we can start to have some fun running multiple OS's. :-)

-- 
| _    /| | Robert Minich             |
| \'o.O'  | Oklahoma State University |  
| =(___)= | minich@d.cs.okstate.edu   | 
|    U    | - Bill sez "Ackphtth"     |

JONESD@kcgl1.eng.ohio-state.edu (David Jones) (06/07/90)

gft_robert@gsbacd.uchicago.edu writes:
>In article <:SY35CD@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes...
>
>>A compiler, for example, really has no business
>>calling GetNextEvent *ever*.
>
>And if the user wants to interrupt the compilation mid-compile?  You'd better
>have some way of finding at least this out.  GetNextEvent (or WaitNextEvent)
>seems the proper way to do this to me.

Traditionally, the way this happens is that a hardware interrupt (e.g.
hitting a keyboard key) causes supervisory code to execute that monitors
the interrupts for a sequence which means "stop what you're doing".  If a
stop request is detected, the supervisory code alters the saved context of
the task to cause its execution to cease, hopefully in a graceful manner.
In the case of most programs, including compilers, graceful rundown can
be handled by the run-time environment, so the programmer can write his
code without any conscious awareness that the user may want to interrupt it.
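
In outline (a generic sketch, no particular OS, with every name invented for
illustration):

struct task {
    volatile int abort_requested;           /* set from interrupt context */
    /* saved registers, stack pointer, etc. would live here */
};

extern void graceful_rundown(struct task *t);           /* hypothetical */
extern void restore_context_and_resume(struct task *t); /* hypothetical */

static struct task *current_task;

/* Runs in interrupt context whenever a key arrives; note that the
   application itself never polls for this. */
void keyboard_interrupt(unsigned char key)
{
    if (key == 0x03)                        /* the "stop what you're doing" code */
        current_task->abort_requested = 1;
}

/* Runs in the supervisor when it is about to resume a task. */
void dispatch(struct task *t)
{
    if (t->abort_requested)
        graceful_rundown(t);                /* unwind via the run-time environment */
    else
        restore_context_and_resume(t);
}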

Do Macintosh keyboards generate interrupts (talking about hardware in
comp.arch, how strange :-)?  The software environment sure wants to behave
as if the I/O devices are strictly polled.

David L. Jones               |      Phone:    (614) 292-6929
Ohio State University        |      Internet:
1971 Neil Ave. Rm. 406       |               jonesd@kcgl1.eng.ohio-state.edu
Columbus, OH 43210           |               jones-d@eng.ohio-state.edu

Disclaimer: A repudiation of a claim.

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (06/07/90)

In article <:SY35CD@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes...
> How about the fact that programmers may have better things to do with their
> time than warp code to fit into the windowing universe? I realise that on
> the mac 90% of the programs are 90% user-interface, but that's not always
> the best way to do things. A compiler, for example, really has no business
> calling GetNextEvent *ever*.
In article <1990Jun6.222126.2888@midway.uchicago.edu>, gft_robert@gsbacd.uchicago.edu writes:
: And if the user wants to interrupt the compilation mid-compile?  You'd better
: have some way of finding at least this out.  GetNextEvent (or WaitNextEvent)
: seems the proper way to do this to me.

I can't believe I just read this.  If you want to interrupt a compilation
part-way through, you want a way which is *sure* to work, not a way which
*might* work if the compiler-writer happened to remember and the compiler
doesn't happen to be busy chasing its tail in a place which couldn't
possibly go into an infinite loop but did.  What would you think of a
telephone which would let you place a 911 call _sometimes_, if someone else
using the exchange remembered to press a magic button on _his_ phone every
few seconds?

As for user interaction with compilers; that's fine in a program development
system, but what if I have an application where a C program writes a short
Prolog program which writes a combined C/SQL program, compiles that, and then
runs it?  Why should someone sitting in front of the screen ever have to
interact with the C compiler that's looking at the C/SQL program?  What does
_she_ know about the program it's compiling?

If I have a well-written compiler from another environment (just for the
sake of argument, let's suppose it compiles Scheme to C and then calls the
C compiler), why should I have to go through it planting calls to some
routine that has nothing to do with what the compiler itself is about?
How is anyone but the original authors to know where to call FrobEventNext
or whatever it's called, and why should _they_ have thought about it, given
that they wrote for other operating systems lacking this restriction?

The Interlisp-D environment let you interrupt anything you liked using
window and mouse:  in the background you could pop up a PSW window with
a menu of all the processes, select the one you wanted and then select
interrupt and it got a signal.  (Or you could give a window keyboard
focus and then type an interrupt character.  Much the same thing.)  A
process could _protect_ itself explicitly, but the default was to be
interruptable.  You could put a process to sleep and wake it up again.
Interlisp-D is a single-address-space system, just like the Mac, except
for using virtual memory.

The Mac has all the hardware needed to do this.  Take this to comp.os
-- 
"A 7th class of programs, correct in every way, is believed to exist by a
few computer scientists.  However, no example could be found to include here."

martens@canoe.cis.ohio-state.edu (Jeff Martens) (06/07/90)

In article <41684@apple.Apple.COM> daveo@Apple.COM (David M. O'Rourke) writes:
>peter@ficc.ferranti.com (Peter da Silva) writes:
>>the best way to do things. A compiler, for example, really has no business
>>calling GetNextEvent *ever*.

>  What if the user, a programmer in this case, wants to stop the compile??

This isn't the compiler writer's or the compiler's concern.  The user
should send a break to the compiler, as under most operating systems.
I guess on the Mac you reboot.

>  Even though a compiler may be primarily a batch oriented process, there is
>still a user which needs to be serviced.  In addition more complex compiler
>designs of the future might call for more programmer interaction when it
>finds a syntax error, allowing the programmer to fix the syntax error and
>continue.

If the user needs to be serviced, let him get his service in another
window.  With pre-emptive multitasking, this isn't a problem.  It just
looks like a problem to some people because they use archaic systems
that don't multitask, or barely multitask, i.e., non-preemptively.

>  I basically agree with your point, but I think you're limiting your view 
>of what a compiler should be to what compilers current are, there are reason's
>and examples for user interaction in almost any well thoughtout software
>tool.

But you didn't give any, now did you?  If all you have is a hammer,
every problem looks like a nail.  If all you have is MultiFinder, then
any attempt at transparent multitasking looks impossible.
-=-
-- Jeff (martens@cis.ohio-state.edu)

Chemlawn, trademark, suburban distributor of toxic chemicals.

bruner@sp15.csrd.uiuc.edu (John Bruner) (06/07/90)

In article <:SY35CD@xds13.ferranti.com>, peter@ficc (Peter da Silva) writes:
>How about the fact that programmers may have better things to do with their
>time than warp code to fit into the windowing universe? I realise that on
>the mac 90% of the programs are 90% user-interface, but that's not always
>the best way to do things. A compiler, for example, really has no business
>calling GetNextEvent *ever*.

This is precisely one of my complaints about windowing systems in
general.  The implicit assumption seems to be that all programs should
be restructured into big event loops.  An application which doesn't
call GetNextEvent() or XtAppProcessEvent() or whatever on a regular
basis is "not well-behaved."  Never mind that the application might
have some complex long-running task to do.  To operate "properly" it
must artificially break up its computation into small pieces that can
be executed in between calls to the event handler.  Theoretically this
can always be done, but is the added complexity during programming,
debugging, and maintenance worth it?
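
[A sketch of the restructuring being complained about, in Toolbox-style C.
This is a fragment, not a complete program: DoOneSlice(), HandleEvent() and
gWorkLeft are invented names, the usual initialization is omitted, and header
names vary with the development system.]

    #include <Events.h>

    extern Boolean gWorkLeft;              /* invented: more work left to do?  */
    extern void    DoOneSlice(void);       /* invented: one small piece of it  */
    extern void    HandleEvent(EventRecord *ev);

    void MainLoop(void)
    {
        EventRecord ev;

        for (;;) {
            /* sleep up to a second when idle, not at all when busy */
            if (WaitNextEvent(everyEvent, &ev, gWorkLeft ? 0L : 60L, NULL))
                HandleEvent(&ev);
            else if (gWorkLeft)
                DoOneSlice();  /* the computation, cut into event-sized pieces */
        }
    }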

The Macintosh carries this idea further -- an application which isn't
"well behaved" in this sense not only affects its own user-interface
but also the execution of every other program in the system.

An operating system is supposed to provide a set of facilities which
support the development, debugging, and execution of programs.
Cooperative multitasking doesn't do this -- the development of a
program is *harder*, not easier in this environment.  If the program
goes into an infinite loop, what do you do?  (On the Mac, you press
the button which generates a non-maskable-interrupt, trap to the
firmware debugger, hope that the interrupt didn't come at the wrong
time, patch in an "exit-to-shell" system call, and try to continue
with the rest of the applications.  With a real operating system you
type a special character or command and abort the offending task
without affecting anything else.)

By contrast, consider virtual memory.  It is a very useful facility
because each process can be written as though it had a large, private,
physically-contiguous address space, with system-supported walls
between applications.  Multiprogramming similarly should enable each
process to run as if it had the entire machine to itself, but the
"walls" in a cooperative multitasking system are thin indeed.  How
useful would virtual memory be if each process were required to
service page faults and disc completion "events" for other processes
running at the same time?
--
John Bruner	Center for Supercomputing R&D, University of Illinois
	bruner@csrd.uiuc.edu		(217) 244-4476	

peter@ficc.ferranti.com (Peter da Silva) (06/07/90)

In article <41684@apple.Apple.COM> daveo@Apple.COM (David M. O'Rourke) writes:
> peter@ficc.ferranti.com (Peter da Silva) writes:
> >A compiler, for example, really has no business
> >calling GetNextEvent *ever*.

>   What if the user, a programmer in this case, wants to stop the compile??

Then you hit ^C and abort that process.

>   Even though a compiler may be primarily a batch oriented process, there is
> still a user which needs to be serviced.  In addition more complex compiler
> designs of the future might call for more programmer interaction when it
> finds a syntax error, allowing the programmer to fix the syntax error and
> continue.

These are fine for initial coding and debugging, but that's a different
animal from a production compiler.

Also, it's generally more useful to fix a bunch of errors at once. You know:
don't stop at one bug.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (06/07/90)

Found in the RISKS newsgroup:

:Date: Wed, 6 Jun 90 09:08:04 PDT
:From: jaime@tcville.hac.com (Jaime Villacorte)
:Subject: New computerized scoring system fails during Indy 500
:
:        The following appeared in an article by Tim Considine in the June 4,
:1990 issue of Autoweek. It concerned the use of a new computerized scoring
:system manufactured by Dorian Industries, an Australian electronics firm for
:use in the recent Indianapolis 500 race.
:
:        "Data-1, as the system is known is arguably the most advanced and
:        foolproof scoring system in the world. Well almost foolproof." [...]
:
:        "...all monitors went blank on Lap 130 of the race.
:          The cause of such a catastrophe: A laser printer ran out of paper
:        and the system froze. A simple problem, but one that hadn't been
:        simulated during testing.


Gee, I wonder how that could have happened.
-- 
Don		D.C.Lindsay 	leaving CMU .. make me an offer!

seanf@sco.COM (Sean Fagan) (06/08/90)

(Note followup again, ok?)

In article <41684@apple.Apple.COM> daveo@Apple.COM (David M. O'Rourke) writes:
>  What if the user, a programmer in this case, wants to stop the compile??

Uhm, Unix handles that just fine.  So did RSTS, RT11, VMS, etc.  Why should
an application program have to implement parts of the OS in order to be
useful?

>"Hey where'd you learn to shoot like that?" ... "At the 7-11."

Not quite.  The response was "7-11".  Only.
(8-))

-- 
-----------------+
Sean Eric Fagan  | "It's a pity the universe doesn't use [a] segmented 
seanf@sco.COM    |  architecture with a protected mode."
uunet!sco!seanf  |         -- Rich Cook, _Wizard's Bane_
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

sbrooks@beaver..UUCP (Steve Brooks) (06/08/90)

In article <41684@apple.Apple.COM> daveo@Apple.COM (David M. O'Rourke) writes:
>peter@ficc.ferranti.com (Peter da Silva) writes:
>>the best way to do things. A compiler, for example, really has no business
>>calling GetNextEvent *ever*.
>
>  What if the user, a programmer in this case, wants to stop the compile??
>

That's the responsibility of the operating system.


This is exactly my biggest complaint about the Macintosh environment. There
is no clear distinction between applications and operating system functions.

An operating system is the set of software components that allocates system
resources to application programs.  There should always be a clear distinction
between the OS and the applications.

Enough of this, let's get back to comp.arch.

>-- 
>daveo@apple.com                                               David M. O'Rourke


=====
SjB.

My opinions.

deraadt@enme.UCalgary.CA (Theo Deraadt) (06/08/90)

In article <1990Jun6.222126.2888@midway.uchicago.edu>, gft_robert@gsbacd.uchicago.edu writes
>In article <:SY35CD@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes...
>>How about the fact that programmers may have better things to do with their
>>time than warp code to fit into the windowing universe? I realise that on
>>the mac 90% of the programs are 90% user-interface, but that's not always
>>the best way to do things. A compiler, for example, really has no business
>>calling GetNextEvent *ever*.
>
>And if the user wants to interrupt the compilation mid-compile?  You'd better
>have some way of finding at least this out.  GetNextEvent (or WaitNextEvent)
>seems the proper way to do this to me.
>
>You may indeed have to change some of your code to run properly on the Mac.
>Or put another way: you may have to change some of your code to put the user
>incomplete control.  The above example as a case in point.

And why are signals (esp. unix type signals) not a correct way to handle this?
Calling GetNextEvent() sounds like polling to me.

So, if I wanted to do a large matrix add, I would have to call GetNextEvent()
every couple of rows perhaps. And where do I put GetNextEvent() in my
compiler? I guess I put it in the parser, and it calls GetNextEvent() every
100th token or something like that. For heavily recursive stuff, does this
not seem to get overly messy?
 <tdr.

SunOS 4.0.3: /usr/include/vm/as.h,  Line 44	| Theo de Raadt
Is it a typo? Should the '_'  be an 's'?? :-)	| deraadt@enme.ucalgary.ca

jesup@cbmvax.commodore.com (Randell Jesup) (06/08/90)

In article <1990Jun7.142800.4113@csrd.uiuc.edu> bruner@sp15.csrd.uiuc.edu (John Bruner) writes:
>This is precisely one of my complaints about windowing systems in
>general.  The implicit assumption seems to be that all programs should
>be restructured into big event loops.  An application which doesn't
>call GetNextEvent() or XtAppProcessEvent() or whatever on a regular
>basis is "not well-behaved."  Never mind that the application might
>have some complex long-running task to do.  To operate "properly" it
>must artificially break up its computation into small pieces that can
>be executed in between calls to the event handler.  Theoretically this
>can always be done, but is the added complexity during programming,
>debugging, and maintenance worth it?

	Under Intuition (the Amiga windowing/UI system), an application
can be well behaved without ever calling GetMsg(w->UserPort).  At one level,
you can run in a virtual terminal under a shell and know nothing of windows.
At another level you can open a window to output and use it as you see fit
(like display the results of a fractal) and let Intuition handle all window
events and refresh for you ("smart refresh" - i.e. backing store for hidden
areas).  Or you can get refresh events, and get notified of mouse clicks,
movements, menu selections, gadgets, etc., if you want.

	Plus you can segregate a task into a UI process and a computation
process, as has been mentioned.

Disclaimer: I work for Commodore, and love pre-emptive multitasking.
-- 
Randell Jesup, Keeper of AmigaDos, Commodore Engineering.
{uunet|rutgers}!cbmvax!jesup, jesup@cbmvax.cbm.commodore.com  BIX: rjesup  
Common phrase heard at Amiga Devcon '89: "It's in there!"

sjc@key.COM (Steve Correll) (06/08/90)

> In article <:SY35CD@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes...
>How about the fact that programmers may have better things to do with their
>time than warp code to fit into the windowing universe? I realise that on
>the mac 90% of the programs are 90% user-interface, but that's not always
>the best way to do things. A compiler, for example, really has no business
>calling GetNextEvent *ever*.

In article <1990Jun6.222126.2888@midway.uchicago.edu>, gft_robert@gsbacd.uchicago.edu writes:
> And if the user wants to interrupt the compilation mid-compile?  You'd better
> have some way of finding at least this out.  GetNextEvent (or WaitNextEvent)
> seems the proper way to do this to me.
> 
> You may indeed have to change some of your code to run properly on the Mac.
> Or put another way: you may have to change some of your code to put the user
> in complete control...

I think you both want to be able to interrupt a compilation; you differ on
whether to provide this service within the OS or within the application. Many
operating systems (Unix, for example) provide the service in a fashion that
requires no change to the user's code: in fact, when you port an application
to Unix, the user is in complete control unless you take steps to prevent it.

The Mac approach simplifies the OS. But consider porting a large, compute-bound
Fortran program. In any OS, you will have to understand and convert features
dependent on the operating system, such as subroutines which read single
keystrokes without requiring you to hit the return key. But you will often not
have to understand the inner workings of the program: you can move somebody
else's semiconductor simulator to your machine and use it to model a flip-flop
without becoming an expert on how to write semiconductor simulators.
Ironically, this is the "appliance" paradigm claimed for the Mac itself: you
shouldn't have to understand how to build a microwave oven to reheat pizza.

With the Mac, however, to make the application run in the background and share
cycles, you must understand the control flow of the program (which loops are
executed how often, and how that varies according to the input data you give
it) before you can decide where to insert the GetNextEvent calls. Sometimes
that's not easy.

This discussion should probably decamp from comp.arch.
-- 
...{sun,pyramid}!pacbell!key!sjc 				Steve Correll

mattly@aldur.sgi.com (James Mattly) (06/09/90)

In article <1990Jun7.212351.20426@calgary.uucp>, deraadt@enme.UCalgary.CA (Theo Deraadt) writes:
> In article <1990Jun6.222126.2888@midway.uchicago.edu>, gft_robert@gsbacd.uchicago.edu writes
> >In article <:SY35CD@xds13.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes...
> >>How about the fact that programmers may have better things to do with their
> >>time than warp code to fit into the windowing universe? I realise that on
> >>the mac 90% of the programs are 90% user-interface, but that's not always
> >>the best way to do things. A compiler, for example, really has no business
> >>calling GetNextEvent *ever*.
> >
> >And if the user wants to interrupt the compilation mid-compile?  You'd better
> >have some way of finding at least this out.  GetNextEvent (or WaitNextEvent)
> >seems the proper way to do this to me.
> >
> >You may indeed have to change some of your code to run properly on the Mac.
> >Or put another way: you may have to change some of your code to put the user
> >incomplete control.  The above example as a case in point.
> 
> And why are signals (esp. unix type signals) not a correct way to handle this?
> Calling GetNextEvent() sounds like polling to me.
> 
> So, if I wanted to do a large matrix add, I would have to call GetNextEvent()
> every couple of rows perhaps. And where do I put GetNextEvent() in my
> compiler? I guess I put it in the parser, and it calls GetNextEvent() every
> 100th token or something like that. For heavily recursive stuff, does this
> not seem to get overly messy?
>  <tdr.
> 
Exactly!
	Why should the programmer belly-ache about putting the GetNextEvent()
into their program when the compiler could do a similar job?  Consider using
the pragma() directive in C compilers (or {$switch+/-} for Pascal) to turn
on or off an automatic placement of GetNextEvent, or some equivalent call,
after every n lines of code.  Or inside a loop (not a time-critical one!).  Or
inside the basic blocks (for all those compiler fans out there!).  This would
seem to release the programmer from putting a GetNextEvent every so often
(which seems to be the basis of everyone's complaint) and still gain the
benefits of calling it.
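
[A sketch of the suggestion.  The pragma name and YieldToOtherApps() are
invented for illustration; the second fragment is roughly what the compiler
would have to emit in place of the first.]

    /* As the programmer would write it (hypothetical pragma): */
    #pragma auto_yield(1000)          /* invented: yield every 1000 iterations */
    void MatrixAdd(long n, double a[], double b[], double c[])
    {
        long i;
        for (i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* Roughly what the compiler would emit in its place: */
    void MatrixAdd(long n, double a[], double b[], double c[])
    {
        long i, count = 0;
        for (i = 0; i < n; i++) {
            c[i] = a[i] + b[i];
            if (++count == 1000) {    /* inserted by the compiler */
                count = 0;
                YieldToOtherApps();   /* invented wrapper around WaitNextEvent */
            }
        }
    }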

	This approach wins for the programmer who doesn't want to "break" his
train of thought while writing a piece of code.  It also wins for the Mac OS
which lets the programmer decide when a context switch would be a good idea
(IMHO, the best reason for Cooperative Multitasking).

	Perhaps Apple could set up a variant of GetNextEvent (or WaitNextEvent)
which is optimized for this specific use.  Also, perhaps the compiler could
have options to specify the parameters used for the automatic calls.  There
should probably be a function which is called if the GetNextEvent notices an
updateEvt, or a Command-. combination, again configurable through pragmas so
that the response function could be changed in different sections of code.

	Personally, I prefer the idea of "Cooperative" Multitasking (although I
would like to have separate address spaces) because it eliminates many hangups
for OS designers to worry about.  (If the application developer can't write
code, he shouldn't be programming. :-)  Preemptive Multitasking seems to punish
processes if they don't behave.  Preemptive Multitasking was 'designed' when
I/O was a good thing for the scheduler.

	Anyway, it seemed like a good idea when I typed it up; enjoy.

--------------------
	James Mattly (mattly@aldur.esd.sgi.com)
	You actually think that SGI listens to me?  Wow, what a concept!
----------------

jdarcy@encore.com (Mostly Useless) (06/09/90)

mattly@aldur.sgi.com (James Mattly):
>Consider using
>the pragma() directive in C compilers, (or {$switch+/-} for pascal), to turn
>on or off an automatic placement of GetNextEvent, or some equivalent call
>after every n lines of code.  Or inside a loop (not a time critical one!).  Or
>inside the basic blocks (for all those compiler fans out there!).  This
>would seem to release the programmer from putting a GetNextEvent every so
>often (which seems to be the basis of everyones complaint) and still gain the
>benefits from calling it.

What an amazingly half-baked idea!  Oh, sorry... I just insulted the bakers
of the world.  What you're talking about is placing an absolutely impossible
task in front of the compiler writers (who certainly have enough to worry
about already) in hopes of making life just a *tiny* bit easier for OS and
application developers.  There is NO WAY that the compiler can have any but
the foggiest idea concerning which points are "appropriate" for a possible
context switch.  The application designer will certainly have a much better
idea, or the OS can allow the user to choose, but the compiler is absolutely
the worst place to attempt a solution.

This brings me to my next point, which I would have let slide if I weren't
posting already.  I've heard a lot from the Mac interface folks saying how
"the user is in control".  However, long experience with the Mac has taught
me that the application can do pretty much what it damn well pleases.  It
would take me all of two minutes to write a Mac program that will spin in
an infinite loop without calling GetNextEvent, forcing the user to reboot.
Hell, I'll save them the trouble; I can just as easily write a program that
immediately reboots the machine.  All of this is of course done without
special privileges; the Mac HAS NO IDEA of privileges.  With the application
in such complete control, I don't see how anyone can say this gives more
power to the user.  My only guess is that the people saying such things
have been fortunate enough to use well-behaved Mac applications and have
never lifted a finger in an effort to write one.

Take *any* preemptive multitasking system as a counterexample (UNIX, Amiga,
VMS, whatever).  The user wants to stop a compile or other task, so they
do something to tell the OS.  The OS turns around and basically yanks the
application's cord without so much as a please or thank you.  "Sorry, bub.
Yer outta here."  Thus can even the most ill-behaved applications be tamed.
If you ask me - which you didn't - this is the way to keep control in the
users' hands.
--

Jeff d'Arcy, Generic Software Engineer - jdarcy@encore.com
      Nothing was ever achieved by accepting reality

golding@saturn.ucsc.edu (Richard A. Golding) (06/09/90)

In article <jdarcy.644899889@zelig> jdarcy@encore.com (Mostly Useless) writes:
>mattly@aldur.sgi.com (James Mattly):
>>Consider using
>>the pragma() directive in C compilers, (or {$switch+/-} for pascal), to turn
>>on or off an automatic placement of GetNextEvent, or some equivalent call
>>after every n lines of code.  Or inside a loop (not a time critical one!).  Or
>>inside the basic blocks (for all those compiler fans out there!).  This
>>would seem to release the programmer from putting a GetNextEvent every so
>>often (which seems to be the basis of everyones complaint) and still gain the
>>benefits from calling it.
>
>... What you're talking about is placing an absolutely impossible
>task in front of the compiler writers (who certainly have enough to worry
>about already) in hopes of making life just a *tiny* bit easier for OS and
>application developers.  There is NO WAY that the compiler can have any but
>the foggiest idea concerning which points are "appropriate" for a possible
>context switch.  The application designer will certainly have a much better
>idea, or the OS can allow the user to choose, but the compiler is absolutely
>the worst place to attempt a solution.

In fact, some recent research has shown just the opposite: that
compile-time assistance is a very *good* thing for operating system
design.  The Emerald system (University of Washington) gets a lot of
compiler assistance, and gets significant speedup as a result.  More to
the point of this newsgroup, the SOAR (Smalltalk On A RISC, UC
Berkeley) processor makes assumptions about code behaviour to allow a
simpler interrupt and context-switching mechanism.  By only performing
context switches at method invocations, things got easier (it's been a
couple of years since I read Ungar's dissertation, so the details are a
bit hazy).

So I think it's rather hasty to say that compiler assists like this
are unreasonable... people are actually doing such things.

-richard
--
-----------
Richard A. Golding, Crucible (work) and UC Santa Cruz CIS Board (grad student)
Internet:  golding@cis.ucsc.edu   Work: {uunet|ucscc}!cruc!golding
Post: Baskin Centre for CE & IS, Appl. Sci. Bldg., UC, Santa Cruz CA 95064

peter@ficc.ferranti.com (Peter da Silva) (06/09/90)

In article <8767@odin.corp.sgi.com> mattly@aldur.sgi.com (James Mattly) writes:
[ have the compiler insert GetNextEvent ]
> every n lines of code.  Or inside a loop (not a time critical one!).

What if the time-critical loop is long-running? Like, in a ray-tracer? It
might be reasonable to check after each scanline, but every pixel is probably
too often. A high-resolution tracer can take a significant amount of time
to do that...

Sure, there are tradeoffs. But how is the compiler to figure them out?

> 	This approach wins for the programmer who doesn't want to "break" his
> train of thought while writing a peice of code.  It also wins for the Mac OS
> which lets the programmer decide when a context switch would be a good idea
> (IMHO, the best reason for Cooperative Multitasking).

But the programmer *isn't* deciding when a context switch would be a good idea!
The compiler is.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

cory@three.MV.COM (Cory Kempf) (06/11/90)

JONESD@kcgl1.eng.ohio-state.edu (David Jones) writes:

>Do Macintosh keyboards generate interrupts (talking about hardware in
>comp.arch, how strange :-)?  The software environment sure wants to behave
>as if the I/O devices are strictly polled.

No.  Keyboards, at least those on the Mac SE, Mac II and later systems,
are on a bus.  They get polled.  As does the mouse.  Or trackball.

+C
-- 
Cory Kempf				I do speak for the company (sometimes).
Three Letter Company						603 883 2474
email: cory@three.mv.com, harvard!zinn!three!cory

cory@three.MV.COM (Cory Kempf) (06/11/90)

seanf@sco.COM (Sean Fagan) writes:

>In article <355@three.MV.COM> cory@three.MV.COM (Cory Kempf) writes:
>>				  Remember: The USER is in control.

>And that is why the MacOS is not a "true" OS.  Because the *USER* (actually,
>the application) is in control, not the OS.

Your statement is meaningless.  MacOS defines a way to manage the
resources of the system: memory, disk, screen, keyboard, mouse, CPU,
etc.  Whether or not its *POLICY* is the "best" one is a matter for debate.
However, it does provide for management of said resources.  Ergo, it is an OS.

And until OS writers start to give away hardware, I will continue to
prefer that I control the hardware that I buy, not somebody else.

+C
-- 
Cory Kempf				I do speak for the company (sometimes).
Three Letter Company						603 883 2474
email: cory@three.mv.com, harvard!zinn!three!cory

daveh@cbmvax.commodore.com (Dave Haynie) (06/12/90)

In article <1682@mcrware.UUCP> jejones@mcrware.UUCP (James Jones) writes:
>In article <26437.266ae612@vaxb.acs.unt.edu> ac08@vaxb.acs.unt.edu writes:
>>And those preemptive multitasking systems
>>suck RAM like nobody's business...

>Gee, that would be news to those who have used OS-9/6809 Level One (designed
>to work in a single 64K address space) for years.  (Admittedly, when I composed
>a mail reply (which bounced, alas), I was on a Level Two system, but even with
>windowing and a 96K RAM disk, 512K is fairly comfortable.)  For that matter, I
>wouldn't call OS-9/68K a memory hog...

In fact, a non-preemptive multitasking system, like the Mac's MultiFinder or
Microsoft's Windows, is extremely likely to take more memory than a well
designed preemptive system.  The only reason the two aforementioned operating
systems aren't preemptive, despite how their proponents will claim that
"cooperative multitasking is superior", is compatibility with programs
written for the non-multitasking operating systems they replace.  Such
programs, and the underlying operating systems, were never designed for
preemption.  So they can't take a context swap just anywhere, or they'll end
up having some system-global context information clobbered.  And the reason
they need more memory than a well designed preemptive system is these globals.
For every context swap, all kinds of this global context information must be
stored somewhere.  The preemptive system need only store CPU context
information -- program counter and various registers.
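
[For scale, a sketch of roughly what "CPU context" amounts to on the MC68K
family.  This is illustrative only, not any particular kernel's definition.]

    /* Per-task state a simple preemptive 68K kernel might save on each switch: */
    struct cpu_context {
        unsigned long  d[8];     /* data registers D0-D7    */
        unsigned long  a[7];     /* address registers A0-A6 */
        unsigned long  usp;      /* user stack pointer (A7) */
        unsigned long  pc;       /* program counter         */
        unsigned short sr;       /* status register         */
    };
    /* A cooperative swap under MultiFinder must additionally shuffle a large
       set of application-specific low-memory globals and the A5 world -- the
       "global context information" referred to above. */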

Certainly it's possible to build a preemptive system that takes up too much
memory -- UNIX System V Release 4 and OS/2 are two good examples of this.  
The AmigaOS will also run with the windowing system, RAM disk, etc. in 512K.
Or, for that matter, 146 Megabytes (the most that can currently be plugged 
into an Amiga computer with existing memory cards).

>	James Jones


-- 
Dave Haynie Commodore-Amiga (Amiga 3000) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
	"I have been given the freedom to do as I see fit" -REM

jgk@demo.COM (Joe Keane) (06/12/90)

I think it's interesting to compare the Macintosh OS to RSTS.  The hardware is
pretty comparable; if anything the Macintosh is more powerful.  RSTS has a
small kernel (unlike Unix), it has shared libraries, and it's pretty snappy.
Even given that the Macintosh doesn't have an MMU, there's no reason you can't
do something similar.  Don't get me wrong, I don't think RSTS is great, but it
shows what people were doing a decade ago on wimpy (by today's standards)
hardware.

Certainly the Macintosh's user interface is a step forward (never mind Apple
took it from Xerox).  But the OS is a giant leap into the 50s.  Everything we
know about interrupts and scheduling is thrown out the window.  Remember what
RSTS stands for: resource sharing, time sharing.  If Apple named the Macintosh
OS that way it'd be RSRS.  My question is, why do PC companies feel compelled
to give us such crappy operating systems?

You know, it's too bad you can't hit control-T to see what your Macintosh is
trying to do; of course one reason is that the stupid thing doesn't have a
control key.  Now that's something that makes the user feel in control, even
if it doesn't actually do anything.

P.S.  Don't tell me about Intuition.  I know about it and like it a lot.

dankg@volcano.Berkeley.EDU (Dan KoGai) (06/12/90)

In article <2922@demo.COM> jgk@osc.COM (Joe Keane) writes:


>Certainly the Macintosh's user interface is a step forward (never mind Apple
>took it from Xerox).  But the OS is a giant leap into the 50s.  Everything we
>know about interrupts and scheduling is thrown out the window.  Remember what
>RSTS stands for: resource sharing, time sharing.  If Apple named the Macintosh
>OS that way it'd be RSRS.  My question is, why do PC companies feel compelled
>to give us such crappy operating systems?

	Nah, the Mac OS is RSED: resource sharing, event driven.  An
event-driven OS is a rather newer concept.  RSTS was created in an environment
where the computer was an expensive gadget that had to be shared.  The
Macintosh, on the other hand, is a child of the personal computer: 100% of the
CPU time is yours (or your session's, to be more exact).  You are comparing
Apple to Orange.

>You know, it's too bad you can't hit control-T to see what your Macintosh is
>trying to do; of course one reason is that the stupid thing doesn't have a
>control key.  Now that's something that makes the user feel in control, even
>if it doesn't actually do anything.

	While you understand computers, you don't understand the Macintosh.
Control-T?  What a joke.  The Mac already has Command-Q to quit, and most
applications use Command-. to halt a command.  It has a control key today (at
least my SE + Extended Keyboard does), but only for telecom software and
QuicKeys.  The control key is a no-no concept for the Macintosh: it makes the
programmer's work harder, because you have to write bullet-proof software.
But it obviously benefited users.  The best thing the Macintosh did was tell
us that computers are for users, not for programmers who program for the
programmer's sake.
	And if you have the Programmer's Key INIT, you can easily interrupt
from the ADB keyboard.  And if you have TMON, you can use a debugger in a
window.  Now tell me why the Macintosh needs Control-T or Control-C (though I
personally think Control-Z under UNIX is nifty).

----------------
____  __  __    + Dan The Mac Bigot
    ||__||__|   + E-mail:	dankg@ocf.berkeley.edu
____| ______ 	+ Voice:	+1 415-549-6111
|     |__|__|	+ USnail:	1730 Laloma Berkeley, CA 94709 U.S.A
|___  |__|__|	+	
    |____|____	+ "What's the biggest U.S. export to Japan?" 	
  \_|    |      + "Bullshit.  It makes the best fertilizer for their rice"

edwardj@microsoft.UUCP (Edward JUNG) (06/13/90)

In article <1990Jun7.212351.20426@calgary.uucp> deraadt@enme.UCalgary.CA (Theo Deraadt) writes:
>In article <1990Jun6.222126.2888@midway.uchicago.edu>, gft_robert@gsbacd.uchicago.edu writes
[...]
>
>So, if I wanted to do a large matrix add, I would have to call GetNextEvent()
>every couple of rows perhaps. And where do I put GetNextEvent() in my
>compiler? I guess I put it in the parser, and it calls GetNextEvent() every
>100th token or something like that. For heavily recursive stuff, does this
>not seem to get overly messy?
> <tdr.
>
>SunOS 4.0.3: /usr/include/vm/as.h,  Line 44	| Theo de Raadt
>Is it a typo? Should the '_'  be an 's'?? :-)	| deraadt@enme.ucalgary.ca

The major problem I would say with GetNextEvent() is that the application cannot
statically determine the grain at which the polling should be performed.  The
frequency at which the user should receive time to get fast response is pretty
fixed at about 1/10th to 1/15th of a second.  The frequency at which code instructions
are executed, however, is NOT fixed.  Therefore it is often difficult to guarantee
good design in the polled system that must be portable across machines of different
speeds.

Designing for the worst case may sacrifice performance in the better cases,
since the API call is not free (and, as a matter of fact, trusts other
processes to be designed for the worst case as well).  Worse, the perceived
frequency is more directly tied to the number of processes executing.

To be smart about this, an application would have to clock itself against a real
time clock and make a scheduler of its own.  Every application would have to do
this.  I think that is a compelling argument to place it into the OS domain since
it involves arbitrating a global resource.
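
[A sketch of that self-clocking idea on the Mac: rate-limit the polling
against the real-time clock so the grain no longer depends on CPU speed.
A fragment only; HandleEvent() is an invented name, the 10-tick threshold
(about 1/6 second) is arbitrary, and header names vary with the development
system.]

    #include <Events.h>

    #define kPollTicks 10L                       /* 60 ticks = one second */

    extern void HandleEvent(EventRecord *ev);    /* invented dispatcher */

    /* Call from inside the long-running computation as often as convenient;
       it only pays for a real event check every kPollTicks ticks. */
    void MaybePoll(void)
    {
        static unsigned long nextPoll = 0;
        EventRecord ev;

        if (TickCount() < nextPoll)
            return;                              /* too soon -- keep computing */
        nextPoll = TickCount() + kPollTicks;
        if (WaitNextEvent(everyEvent, &ev, 0L, NULL))
            HandleEvent(&ev);
    }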

GetNextEvent() or Yield() makes sense as a "hint", but not as the foundation for
multitasking, and especially not as an argument for user-centric programming.

Edward Jung
Microsoft Corp.

"I do not speak for the company"

jay@argosy.UUCP (Jay O'Conor) (06/13/90)

In article <12539@cbmvax.commodore.com> daveh@cbmvax (Dave Haynie) writes:

>For every context swap, all kinds of this global context information must be
>stored somewhere.  The preemptive system need only store CPU context 
>information -- program counter and various registers.  

You're not serious, are you?  You don't really mean to imply that
"program counter and various registers" are all there is to a process'
context?  I'll grant that the Mac O/S is currently somewhat unwieldy in
identifying everything that constitutes a process context, but every O/S
I'm aware of has more than just processor registers that define a
process context.
Whether the multitasking system is preemptive or cooperative has no
effect on what a process' context is.  Preemptive multitasking systems
can have just as much context information as a cooperative multitasking
system - it's just that the context must be private to the process in a
preemptive scheduler, while much of the context can be global with a
cooperative scheduler.

Jay O'Conor
jay@maspar.com

cory@three.MV.COM (Cory Kempf) (06/13/90)

jdarcy@encore.com (Mostly Useless) writes:

>This brings me to my next point, which I would have let slide if I weren't
>posting already.  I've heard a lot from the Mac interface folks saying how
>"the user is in control".  However, long experience with the Mac has taught
>me that the application can do pretty much what it damn well pleases.  It
>would take me all of two minutes to write a Mac program that will spin in
>an infinite loop without calling GetNextEvent, forcing the user to reboot.
>Hell, I'll save them the trouble; I can just as easily write a program that
>immediately reboots the machine.

But wait a moment to consider... why would a user buy such a broken program?
And, assuming that the user has purchased the program (or otherwise acquired
it), and has found out about this antisocialness, why would they continue to
run it?  Personally, I would send it back.

Which brings up a point that *I* was going to let slide... I made a post
suggesting that a well-written *USER*-oriented program would check for
events frequently (BTW, this is just as true for Unix/X as it is for
MacOS).  No sooner had the post cleared my modem than several people
posted such fine examples of *USER*-oriented programs as Ray Tracers and
Compilers.  Give me a break!  The user interface to 90% of the compilers
out there has not changed since the days of punch cards!!!  Most would not
notice the difference between running on a Mac and running (walking?) on one
of the old IBM batch-mode thingies.  And I have yet to see a shrink-wrapped
ray trace program.  A nice toy for academics, but a waste of disk space
to the average user.  And they do not have much of a UI either.  There is
no real qualitative improvement over:

	$JOB RAYTRCE
	$INPUT = FOO.RT
	$OUTPUT = CONSOLE

Something else to consider: preemptive multitasking is a time-slicing POLICY.
It is not the only one.  It does, however, have the distinction of being the
easiest for an application programmer or compiler writer to deal with, as well
as giving the best OVERALL average wait time in the general case.  It is by no
means the best policy in ALL cases.

And one further thing: people who buy computers don't give a ****** about
how hard programmers have to work.  They couldn't care less where the
line is drawn between OS and application.  A well-written set of cooperating
tasks will run at least as fast as a set of preemptively scheduled tasks.
On a GUI-based system, they will run (as perceived by the user) better.  The
current application will complete actions for the user and then, while the
user is thinking (i.e., the CPU is idle), do other things.  Swapping at event
time when there are no pending events is the best time for such things.  There
is no way that a purely preemptive MT system can always preempt at such times.

>						  With the application
>in such complete control, I don't see how anyone can say this gives more
>power to the user.  My only guess is that the people saying such things
>have been fortunate enough to use well-behaved Mac applications and have
>never lifted a finger in an effort to write one.

Consider: the USER is PAYING for the PROGRAMMER to do things right.  They
are always free to find a programmer (company, actually) that does.

>				  The OS turns around and basically yanks the
>application's cord without so much as a please or thank you.  "Sorry, bub.
>Yer outta here."  Thus can even the most ill-behaved applications be tamed.
>If you ask me - which you didn't - this is the way to keep control in the
>users' hands.

It would seem that Apple (finally) agrees with you... In System 7, the user
can press CMD-Option-ESC and terminate a task.

+C
-- 
Cory Kempf				I do speak for the company (sometimes).
Three Letter Company						603 883 2474
email: cory@three.mv.com, harvard!zinn!three!cory

sbrooks@beaver..UUCP (Steve Brooks) (06/13/90)

In article <1990Jun12.163321.676@agate.berkeley.edu> dankg@volcano.Berkeley.EDU (Dan KoGai) writes:
>In article <2922@demo.COM> jgk@osc.COM (Joe Keane) writes:
>
>
>>Certainly the Macintosh's user interface is a step forward (never mind Apple
>>took it from Xerox).  But the OS is a giant leap into the 50s.  Everything we
>>know about interrupts and scheduling is thrown out the window.  Remember what

Well Put.

>
>computer is an expensive gadget and must be shared.  Macintosh, on the other
>hand is a child of personal computer:  100% CPU time is yours (or your
>sessions, to be more exact).  You are comparing Apple to Orange.

That's perfectly fine if you are 100% of the users. This whole discussion
started because of the limited ability of the Macintosh to multitask. You
can't give 100% CPU time to all "sessions".

>
>>You know, it's too bad you can't hit control-T to see what your Macintosh is
>>trying to do; of course one reason is that the stupid thing doesn't have a
>>control key.  Now that's something that makes the user feel in control, even
>>if it doesn't actually do anything.
>
>	While you understand computer, you don't understand Macintosh.

Is this discussion now going to change from "MacOS is not an OS" to "Macintosh
is not a computer" ??

>Control Key is a no-no concept for Macintosh:  That makes programmer's works
>hard because you have to program a bullet-proof software.  But it obviously

So Macintosh programmers don't have to write bullet-proof software??  No
wonder the Mac is having trouble gaining acceptance.

>benefitted users.  The best thing Macintosh did was telling us computers are
>for users, not for programmer who program for programmer's sake.
>	And if you have Programmer's key INIT, you can easily interrupt with
>ADB keyboard.  And if you have TMON, you can use degubber with window.  Now

Yeah, right. The Mac is "for users", but to really accomplish anything you
need TMON.


Let's get back to comp.arch
=====
SjB.

My Opinions.

bader+@andrew.cmu.edu (Miles Bader) (06/14/90)

> >computer is an expensive gadget and must be shared.  Macintosh, on the other
> >hand is a child of personal computer:  100% CPU time is yours (or your
> >sessions, to be more exact).  You are comparing Apple to Orange.

> >       While you understand computer, you don't understand Macintosh.

> >Control Key is a no-no concept for Macintosh:  That makes programmer's works
> >hard because you have to program a bullet-proof software.  But it obviously
> >benefitted users.  The best thing Macintosh did was telling us computers are
> >for users, not for programmer who program for programmer's sake.

Look, it won't be a real OS *OR* a real computer until you people start
putting articles in front of its name.

	Retching on my keyboard,

		-Miles

ac08@vaxb.acs.unt.edu (C. Irby) (06/14/90)

In article <YaRgR_i00VsaESK7M3@andrew.cmu.edu>, bader+@andrew.cmu.edu (Miles Bader) writes:
>> >computer is an expensive gadget and must be shared.  Macintosh, on the other
>> >hand is a child of personal computer:  100% CPU time is yours (or your
>> >sessions, to be more exact).  You are comparing Apple to Orange.
> 
>> >       While you understand computer, you don't understand Macintosh.
> 
> Look, it won't be a real OS *OR* a real computer until you people start
> putting articles in front of its name.
> 
> 	Retching on my keyboard,
> 
> 		-Miles

Yeah, like THE UNIX, or THE VMS...

or like THE MS-DOS...

Hoping you have a sponge... :)

C Irby

peter@ficc.ferranti.com (Peter da Silva) (06/15/90)

In article <369@three.MV.COM> cory@three.MV.COM (Cory Kempf) writes:
> Which brings up a point that *I* was going to let slide... I made a post
> suggesting that a well written *USER* oriented program would check for 
> events frequently... then several people
> posted such fine examples of *USER* oriented programs as Ray Tracers and
> Compilers.  Give me a break!

No. I won't give you a break.

	a) The first point you missed is that not all programs are
	   user-oriented (or, to be more precise, well suited to being
	   implemented as an editor). Computationally intensive programs
	   are the most common example of this, but what about such
	   things as print spoolers?

	b) The second point you missed is that even some editor-type
	   programs *are* computationally intensive. Frequent calls to
	   GetNextEvent will slow down your massive spreadsheet recalcs.
	   CAD programs with decent renderers spend a lot of time doing
	   3-d clipping calculations.

> And I have yet to see a shrink wrapped
> ray trace program.

Sculpt-3d for the Amiga was the first shrink-wrapped ray-tracer that I
know of, but the equally computationally intensive Videoscape-3d, which is
a scanline renderer, came out about the same time. There are now quite a
few programs of this type (Turbo Silver, Caligari, etc...).

Just because you haven't seen a class of programs doesn't mean its members
comprise the null set.

> A nice toy for academics, but a waste of disk space 
> to the avarage user.  And they do not have much of a UI either.

Sure they do. You have to get the object in somehow. Sculpt-3d has quite
a sophisticated CAD-style front end. Videoscape uses a set of separate
programs with varying user-interfaces to perform the same task.

> A well written set of cooperating
> tasks will run at least as fast as a set of preemptively scheduled tasks.

And...

> On a GUI based system, they will run (as perceived by the user) better.

These are completely unsupported assertions, and demonstrably wrong. The
Amiga GUI doesn't have nearly the polish of the Mac (though the new O/S
is much nicer than the old one), but it's also had a hell of a lot less
development money put into it. On the subject of scheduling, however, it
is light-years ahead of the Mac. Even under a heavy task load the stock
Amiga 1000 is faster, more consistent, and more responsive than even a
Mac-II with a couple of active tasks under Multifinder.
-- 
Have you hugged your wolf today?
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
<peter@ficc.ferranti.com>

daveh@cbmvax.commodore.com (Dave Haynie) (06/15/90)

In article <369@three.MV.COM> cory@three.MV.COM (Cory Kempf) writes:

>And one further thing: people who buy computers don't give a ****** about
>what hard work programmers have to do.  They couldn't care less where the 
>line is drawn between OS and Application.  

While that's true in an ideal world, an ideal world this isn't.  If a certain
class of program or a certain program feature is too difficult for most
programmers to code correctly, then most programmers won't code it correctly.
You would hope that the ones who do code it correctly would be successful in
the marketplace, but another reality is that "The Technical Best" often has a
very hard time competing with "The Best Salesman" or "The Largest Company".

>A well written set of cooperating tasks will run at least as fast as a set 
>of preemptively scheduled tasks. On a GUI based system, they will run (as 
>perceived by the user) better.  

Not likely.  A cooperative set of tasks can't always swap often enough to 
keep things going interactively.  A preemptive system can always swap 
quickly enough, since swapping grain is independent of the applications
running.

>Swapping at event time when there are no pending events is the best time for
>such things.  There is no way that a purely preemptive MT system can always
>preemt at such times.

Both types of systems can use interrupts to signal the system of a user's
intent to communicate -- you don't have to wait for a swap to notice the user.
A preemptive system designed and tuned for interactive use can always appear
to swap at points of interactivity.  Tasks can swap several times a second,
far faster than a user can interact.

Not that all preemptive systems _are_ designed and tuned for user interaction.
But some are.  For example, the Amiga OS.  Keyboard and mouse events cause
interrupts which signal high-priority tasks, for example, the Intuition
task, which manages user events for the GUI.  If you grab the mouse, Intuition
will wake up on the next task swap (a fraction of a second) and process 
movements or keyclicks, even while ray traces or disk activity is going on.
If it finds an event that a program is interested in, it'll signal that task,
which otherwise sits on a wait queue, consuming no CPU time.  

But this system is designed for single user interactivity.  You may have found
the Mac OS much more responsive than UNIX systems like NeXT or Sun, and 
assumed something about preemptive operating systems.  In fact, that's got
absolutely nothing to do with preemption vs. non-preemption, and everything
to do with the design of the Mac OS vs. UNIX.  The Mac OS, even in its
non-multitasking form, is designed (maybe overdesigned) for user interaction.
The UNIX OS, in its basic form, was designed as a multiuser OS.  UNIX
implementers can do a good or bad job of tuning this for the interactive
single user, but to date, I've never seen one as responsive as an Amiga.  For
that matter, using a reasonably fast Mac, even under multifinder, is a downer
if you're used to an Amiga.  And that's not a stab at the Mac's speed -- the
IIcx I use in my lab is plenty quick.  But things block all over the place.
I can't move windows around while the disk is going or a dialog box is up.
I bet when Apple moves to a preemptive multitasking OS, this very same machine
will get much "snappier" to me, the interactive user.

>Cory Kempf				I do speak for the company (sometimes).
>Three Letter Company						603 883 2474
>email: cory@three.mv.com, harvard!zinn!three!cory


-- 
Dave Haynie Commodore-Amiga (Amiga 3000) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
	"I have been given the freedom to do as I see fit" -REM

martin@cbmvax.commodore.com (Martin Hunt) (06/16/90)

In article <575@argosy.UUCP> jay@idiot.UUCP (Jay O'Conor) writes:
>In article <12539@cbmvax.commodore.com> daveh@cbmvax (Dave Haynie) writes:
>
>>For every context swap, all kinds of this global context information must be
>>stored somewhere.  The preemptive system need only store CPU context 
>>information -- program counter and various registers.  
>
>You're not serious, are you?  You don't really mean to imply that
>"program counter and various registers" are all there is to a process'
>context?  I'll grant that the Mac O/S is currently somewhat unweildy in
>identifying everything that constitutes a process context, but every O/S
>I'm aware of has more than just processor registers that define a
>process context.

He's serious.  On multitasking systems, a context swap is usually
just defined by the CPU (and possibly MMU) registers.


>Whether the multitasking system is preemptive or cooperative has no
>effect on what a process' context is.  Preemptive multitasking systems
>can have just as much context information as a cooperative multitasking
>system - it's just that the context must be private to the process in a
>preemptive scheduler, while much of the context can be global with a
>cooperative scheduler.

I've never seen a preemptive multitasking system with as much context
information as a cooperative system, but anything's possible.  The reason
is that cooperative multitasking systems are usually just a kludge hacked
on to basically a single-tasking OS.  To swap processes on a Mac, you
have to take a snapshot of the current system status and save that each
time (that darned global context you mentioned).  This means cooperative
multitasking systems are always at least as slow as preemptive systems,
usually much slower.

A cooperative multitasking system could be designed with as little
process context as typical preemptive systems, but if you go to that
much trouble designing an OS, it wouldn't make any sense to cripple 
it by not writing a decent scheduler.

-- 
Martin Hunt                     martin@cbmvax.commodore.com
Commodore-Amiga Engineering     {uunet|pyramid|rutgers}!cbmvax!martin

peter@ficc.ferranti.com (Peter da Silva) (06/17/90)

In article <12766@cbmvax.commodore.com> martin@cbmvax (Martin Hunt) writes:
> A cooperative multitasking system could be designed with as little
> process context as typical preemptive systems, but if you go to that
> much trouble designing an OS, it wouldn't make any sense to cripple 
> it by not writing a decent scheduler.

Counterexample: most early Forth schedulers were polled multitaskers.  They
were used for real-time, and context was generally just a *subset* of the
processor registers. But of course this is hardly the sort of comparison
the Macintosh people should take as complimentary.
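
[Not Forth, but the flavour of such a minimal-context cooperative switcher
can be sketched in C with the System V ucontext calls.  Each "task" gives up
the CPU only when it explicitly yields; the "scheduler" is a plain
round-robin loop.]

    #include <stdio.h>
    #include <ucontext.h>

    #define NTASKS  2
    #define STACKSZ 16384

    static ucontext_t sched, task[NTASKS];
    static char stack[NTASKS][STACKSZ];

    static void worker(int id)
    {
        int step;
        for (step = 0; step < 3; step++) {
            printf("task %d, step %d\n", id, step);
            swapcontext(&task[id], &sched);     /* explicit, cooperative yield */
        }
    }

    int main(void)
    {
        int i, round;

        for (i = 0; i < NTASKS; i++) {
            getcontext(&task[i]);
            task[i].uc_stack.ss_sp = stack[i];
            task[i].uc_stack.ss_size = STACKSZ;
            task[i].uc_link = &sched;           /* resume here if worker() returns */
            makecontext(&task[i], (void (*)(void)) worker, 1, i);
        }
        for (round = 0; round < 3; round++)     /* the "scheduler": round-robin */
            for (i = 0; i < NTASKS; i++)
                swapcontext(&sched, &task[i]);  /* run task i until it yields */
        return 0;
    }
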
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>