[comp.sys.att] Question about windows and processor time

erict@flatline.UUCP (j eric townsend) (02/20/89)

Here's the test:

Run a lot of stuff on your system at once, say an expire and a big make
(pcomm or elm work well).
Set up 4 or 5 windows to page through.
Using the new wmgr from THE STORE -- the one that lets you hot key through
the windows -- page through the windows at about one per second.

Does your hard drive *pause* while you go from one window to another?
Mine does.

I thought Unix was multitasking, etc etc.  I didn't think it would
have to stop HDU access just to change windows.  Am I missing something
vital?


-- 
J. Eric Townsend | "This is your brain. This is your brain on drugs.  This
 uunet!sugar!flatline!erict | is your brain on drugs with spam and toast."
bellcore!texbell!/            511 Parker #2    |EastEnders Mailing List:
BITNET: cosc5fa@uhnix1.BITNET Houston,Tx,77007 |eastender@flatline.UUCP

jbm@uncle.UUCP (John B. Milton) (02/22/89)

In article <356@flatline.UUCP> erict@flatline.UUCP (j eric townsend) writes:
>
>Here's the test:
>
>Run a lot of stuff on your system at once, say an expire and a big make
>(pcomm or elm work well).
>Set up 4 or 5 windows to page through.
>Using the new wmgr from THE STORE -- the one that lets you hot key through
>the windows -- page through the windows at about one per second.
>
>Does your hard drive *pause* while you go from one window to another?
>Mine does.
>
>I thought Unix was multitasking, etc etc.  I didn't think it would
>have to stop HDU access just to change windows.  Am I missing something
>vital?

Nothing special going on here. Some of the paging that immediately precedes
the pause is the window that is about to be displayed coming in from disk
to the "work screen" inside the window driver. At this point the window
driver is building a picture of how the screen should look. The pause you see
is the window manager copying that finished view into video RAM. Nothing else
happens while this is going on, so it appears to pause hard disk access.
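
A rough sketch of that sequence (hypothetical names and sizes, not the actual
window driver source -- the point is just that the copy to video RAM is one
long operation done in kernel mode, so no user process, including the one
doing your disk I/O, gets scheduled until it finishes):

#include <string.h>

#define SCREEN_BYTES (720 * 348 / 8)    /* approximate 3B1 bitmap size */

static char work_screen[SCREEN_BYTES];  /* image the window driver builds */
static char video_ram[SCREEN_BYTES];    /* stand-in for real display memory */

void
wswitch(void)                           /* hypothetical "switch window" entry */
{
        /* ... build the new window's image into work_screen ... */

        /*
         * One long copy.  Interrupts can still fire, but the process
         * waiting on the disk isn't run again until this returns --
         * hence the apparent pause.
         */
        memcpy(video_ram, work_screen, SCREEN_BYTES);
}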






John

-- 
John Bly Milton IV, jbm@uncle.UUCP, n8emr!uncle!jbm@osu-cis.cis.ohio-state.edu
(614) h:294-4823, w:764-2933;  Got any good 74LS503 circuits?

paulr@prapc2.pra.COM (Paul Raulerson) (02/22/89)

In article <356@flatline.UUCP> erict@flatline.UUCP (j eric townsend) writes:
>
>Set up 4 or 5 windows to page through.
>Using the new wmgr from THE STORE -- the one that lets you hot key through
>the windows -- page through the windows at about one per second.
>
>Does your hard drive *pause* while you go from one window to another?
>Mine does.
>
>I thought Unix was multitasking, etc etc.  I didn't think it would
>have to stop HDU access just to change windows.  Am I missing something
>vital?


Unix is *not* multi-tasking, it merely appears to be so.
Unix really does a little magic trick (at least on the 3B1,
as on most other Unix machines) called "time slicing" or
"processor sharing".  First, it is probable that nothing truly
"paused"; so much processor attention was paid to your
context switch that nothing else did enough work to need to
access the disk again for a second or so.  This happens on
every Unix machine I have ever encountered.  Second, if your
machine does not have plenty of memory, something may
actually have been "swapped" out of core when you
changed windows.  Third, jobs in the background are
usually run at lower priority than jobs in the foreground,
so Unix stole processor cycles from the compiles you had
running to execute your window shift.  It could have been
any one of these three reasons, or any combination of them.
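
(A quick user-level illustration of the priority point, using nothing more
than the standard nice(2) call -- nothing 3B1-specific assumed.  Start this
in the background; because it lowered its own priority, the scheduler gives
it fewer slices whenever a foreground job such as your window shift wants
the CPU.)

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        long i, n = 0;

        errno = 0;
        if (nice(10) == -1 && errno != 0)   /* ask for lower priority */
                perror("nice");

        for (i = 0; i < 100000000L; i++)    /* stand-in for the big make */
                n += i;

        printf("done: %ld\n", n);
        return 0;
}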

Now as to the wmgr from THE STORE... is it PD, and where could
I get a copy?  This windowing interface is driving me batty;
I am too used to Sun interfaces now.  I would love to have
about 4 virtual consoles, though.  Has anyone cooked up a
replacement for the thingamabob yet that can handle that?

Paul


-- 
Paul Raulerson   (Unisys | PR&Associates)   | When you get down to it,   | 
Domain: paulr@pra.COM  paulr@ls.COM         | MS/DOS and OS/2 are just   |
Uucp: rutgers!prapc2!paulr  CIS: 71560,2016 | poor imitations of the real|
Voice: 1+215-275-5983       BIX: paulr      | thing ...                  |

randy@cctb.mn.org (Randy Orrison) (02/23/89)

In article <356@flatline.UUCP> erict@flatline.UUCP (j eric townsend) writes:
| Using the new wmgr from THE STORE -- the one that lets you hot key through
| the windows -- page through the windows at about one per second.

The 'new wmgr'?  The stock wmgr on my 3.51 system lets me hot key
through the windows with Shift-Suspend.  What does the new one do that
mine doesn't?

	-randy
-- 
Randy Orrison - Chemical Computer Thinking Battery - randy@cctb.mn.org
(aka randy@{umn-cs.uucp, ux.acss.umn.edu, umnacvx.bitnet, garnet.uucp})
	"Blow a lawyer to pieces / It's the obvious way
	 Don't wait for a thesis / Do it today"		- Al Stewart

jr@amanue.UUCP (Jim Rosenberg) (03/05/89)

In article <356@flatline.UUCP> erict@flatline.UUCP (j eric townsend) writes:
>Does your hard drive *pause* while you go from one window to another?
>Mine does.
>
>I thought Unix was multitasking, etc etc.  I didn't think it would
>have to stop HDU access just to change windows.  Am I missing something
>vital?

Another response to this article suggested UNIX "wasn't multitasking" but used
the "sleight-of hand called time-slicing".  (Quotes approximate.)  This poster
may have understood correctly but sure didn't explain it.

Yes, UNIX is definitely multitasking -- but only at the *USER* level.  The
*kernel* is not multithreaded or multitasked in any way.  A UNIX process is
a twin-headed beast:  the process executing in user mode and the process
executing in kernel mode.  A process executing in user mode can be preempted
by an interrupt, but it's my understanding that a process executing in kernel
mode *IS NOT PREEMPTED* -- it must voluntarily give up the CPU.  (Obviously
an interrupt handler is typically invoked at the hardware level, so an
interrupt handler *may* preempt a process in kernel mode, but this is not the
same thing as a context switch!  I.e. an interrupt handler may respond to the
disk controller while a process is executing in the window driver, but when
the interrupt handler returns the window driver will resume and the disk
driver which is to consume the results of the interrupt from the controller
will not be scheduled until the window driver yields.)  All drivers
execute in kernel mode.  When you switch windows the code that does this is
part of the window driver, & thus is definitely executing in kernel mode.  A
piggy driver can definitely shut out other drivers if it's badly written.
comp.unix.microport has been the site of some quite vociferous complaints of
this kind from time to time -- though those problems may have been cleared up
by now.
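
To make the interrupt-handler point concrete (made-up names, sketch only):
the handler can mark the I/O done and issue a wakeup, but the process that
wants the data only becomes *runnable*; it doesn't run until whatever is in
the kernel -- the window driver, say -- yields.

static int  iodone;                     /* set when the transfer completes */
static void wakeup(void *chan) { (void)chan; }  /* stub for the real routine */

void
diskintr(void)                          /* hardware interrupt handler */
{
        iodone = 1;
        wakeup(&iodone);                /* the sleeper is now runnable... */
}                                       /* ...but is scheduled only after the
                                           current kernel path gives up the CPU */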

When you get to talking about drivers and yielding the CPU in kernel mode and
such topics you pretty soon get to the awful Dirty Little Secret of UNIX:  The
UNIX kernel has absolutely NOTHING in the way of internal adult mutual
exclusion mechanisms.  There are only two methods of mutual exclusion that I
know about inside the kernel:  (1) Just plain being careful: reducing the
vulnerability to race conditions by knowing what kernel data structures you're
impacting; (2) masking out interrupts.  Yes boys and girls, this awful thing
is true.  The kernel has no semaphores.  The kernel has no message passing.
All those nifty things you learn in operating systems classes don't happen
inside the UNIX kernel.  A driver writer has to know the interrupt structure
across THE WHOLE MACHINE and when masking interrupts is necessary and when it
isn't.
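
In practice method (2) is the classic spl idiom.  Sketched here with
stubbed-out routines (the real ones live in the kernel), it looks roughly
like this:

struct buf {
        struct buf *b_next;
        /* ... */
};

static struct buf *freelist;            /* also touched by the interrupt handler */

/* stand-ins for the real priority-level routines */
static int  spl5(void) { return 0; }    /* mask device interrupts, return old level */
static void splx(int s) { (void)s; }    /* restore the previous level */

struct buf *
getfree(void)
{
        struct buf *bp;
        int s;

        s = spl5();                     /* no semaphore to take -- just shut the
                                           device up while we fiddle the list */
        bp = freelist;
        if (bp != 0)
                freelist = bp->b_next;
        splx(s);
        return bp;
}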

Of course various rewrites of UNIX don't suffer this opprobrium.  Mach does
have kernel threads.  So will System V.5.  What the internal structure of the
kernel under OSF/ix (or whatever it's called) will be I haven't heard.
-- 
 Jim Rosenberg
     CIS: 71515,124                         decvax!idis! \
     WELL: jer                                   allegra! ---- pitt!amanue!jr
     BIX: jrosenberg                  uunet!cmcl2!cadre! /

dpw@lemuria.usi.com (Darryl P. Wagoner) (03/08/89)

In article <450@amanue.UUCP> jr@amanue.UUCP (Jim Rosenberg) writes:
>In article <356@flatline.UUCP> erict@flatline.UUCP (j eric townsend) writes:
>>Does your hard drive *pause* while you go from one window to another?
>>Mine does.
}
}Yes, UNIX is definitely multitasking -- but only at the *USER* level.  The
}*kernel* is not multithreaded or multitasked in any way.  A UNIX process is
}a twin-headed beast:  the process executing in user mode and the process
}executing in kernel mode.  A process executing in user mode can be preempted
}by an interrupt, but it's my understanding that a process executing in kernel
}mode *IS NOT PREEMPTED* -- it must voluntarily give up the CPU.  (Obviously

There are two forms of context switch, voluntary and involuntary.  It
is true that an involuntary one is based upon a hardware interrupt, but it
still causes a context switch even in kernel mode.  In most cases this
doesn't happen in kernel mode, because system calls will either finish
before the time slice is up or will sleep on an address, which causes a
voluntary context switch.

}impacting; (2) masking out interrupts.  Yes boys and girls, this awful thing
}is true.  The kernel has no semaphores.  The kernel has no message passing.

It does have message passing; it couldn't work without it.  Sleeping
on an address is an example of message passing.
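
For the record, the idiom in question looks roughly like this (stubs, not
real kernel source): a process that wants a scarce resource sleeps on an
address associated with it, and whoever frees the resource wakes up every
sleeper on that address.

static int nfreebufs;                   /* free buffers in the cache */

/* stand-ins for the real kernel routines */
static void sleep(void *chan, int pri)  { (void)chan; (void)pri; }
static void wakeup(void *chan)          { (void)chan; }

void
want_buf(void)
{
        while (nfreebufs == 0)
                sleep(&nfreebufs, 20);  /* wait; re-test on every wakeup */
        nfreebufs--;
}

void
release_buf(void)
{
        nfreebufs++;
        wakeup(&nfreebufs);             /* rouse everyone sleeping on the count */
}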



-- 
Darryl Wagoner		(home) dpw@lemuria.uucp or wagoner@imokay.dec.com
Digital Equipment Corp; 	OS/2, Just say No!
Boxboro, Ma  			(w) 508-264-5586
UUCP:  virgin!lemuria!dpw

jcm@mtunb.ATT.COM (was-John McMillan) (03/08/89)

In article <1211@lemuria.usi.com> dpw@lemuria.UUCP (Darryl P. Wagoner) writes:
>
>There are two forms of context switch, voluntary and involuntary.  It
>is true that an involuntary one is based upon a hardware interrupt, but it
>still causes a context switch even in kernel mode.  In most cases this
>doesn't happen in kernel mode, because system calls will either finish
>before the time slice is up or will sleep on an address, which causes a
>voluntary context switch.

We seem to be nailing ourselves on a semantic cross.  Ouch!

A switch of PROCESSES is clearly a context switch as the MAP is changed,
	the process pointer is altered, and the USER structure is
	changed.  This is a fairly time consuming step.  (LIAR: this
	ignores the NULL context-switch wherein the process goes to
	sleep and there's no other process to run.  While at a certain
	conceptual level, context should change to an IDLE process, the 
	reality is that the 3B1's SLEEPING process is left IN-CONTEXT,
	saving the two switches [out/in] if said process is first to be
	runnable.)

A switch from user to kernel -- by trap (including system call) or
	interrupt -- is in another sense a context switch because the
	SYSTEM mode bit becomes set: certain accesses and commands are
	now permitted without inducing protection traps.  BUT, the
	process-context is preserved and fully accessible by the kernel.
	(This is "irrelevant" for interrupts, save that there exists
	a kernel stack to run upon... which really isn't such an irrelevance.)
	In other words: the KERNEL doesn't view it as a context switch
	-- but would YOU believe a kernel given the awful things it
	occasionally does to you?

A switch from kernel state (trap or interrupt) to interrupt state (or
	trap, such as page-fault) is a pretty thin excuse for using the
	term "context switch".  All that happens is some stuff is pushed
	upon the stack.  (Is a subroutine call a "context switch"?-)

Well, who cares?  I've never worried too much about the TERMS because
the actions are what must be coped with.  A "getcontext()" call only
occurs with the actual switch between two runnable processes.  (OK,
I think I could blither on with an exception HERE too -- but... yawn...
it's winter hibernation season out here.)

Hoping you're just as confused as before... ;^)

john	-- att!mtunb!jcm	-- juzz muttering, wake me up when its over

jr@amanue.UUCP (Jim Rosenberg) (03/14/89)

In article <1211@lemuria.usi.com> dpw@lemuria.UUCP (Darryl P. Wagoner) writes:
>}In article <450@amanue.UUCP> jr@amanue.UUCP (Jim Rosenberg) writes:
>}The kernel has no semaphores.  The kernel has no message passing.
>
>It does have message passing.  It couldn't work without them.  The sleeping
>on a address is a example of message passing.

Oh boy.  This could easily get into hair splitting.  You are perfectly right
to take me to task for not mentioning that the kernel does use sleep/wakeup
extensively.  It occurred to me shortly after I made my posting that I was
guilty on this point.  **However** if you believe sleep/wakeup is equivalent
to message passing then I believe you are mistaken.  See Tanenbaum for a full
discussion of this.  Briefly, the following three mutual exclusion methods are
more or less equivalent:  (1) semaphores (2) monitors (3) message passing.
Sleep/wakeup is *NOT* generally equivalent to these three.  The problem is
that a wakeup sent to a process already awake is lost.  The only real way
around this is to have a counter of pending wakeups.  There is a name for
this:  it's called a semaphore!
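
The difference is easy to see side by side (made-up code, not from any
kernel I know of).  With bare sleep/wakeup, a wakeup issued before the other
side has gone to sleep simply evaporates; with a semaphore the count
remembers it:

struct sem { int count; };

/* stubs for the kernel primitives, as before */
static void sleep(void *chan, int pri)  { (void)chan; (void)pri; }
static void wakeup(void *chan)          { (void)chan; }

void
sem_P(struct sem *s)                    /* "down" */
{
        while (s->count <= 0)           /* in a real kernel this test and the   */
                sleep(s, 20);           /* decrement would themselves need      */
        s->count--;                     /* protection, e.g. masking interrupts  */
}

void
sem_V(struct sem *s)                    /* "up" */
{
        s->count++;                     /* the count is the memory: a V issued  */
        wakeup(s);                      /* before anyone sleeps is not lost     */
}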

My main argument still stands.  It's my understanding that sleep/wakeup is
used by the kernel to manage resources which are *NOT AVAILABLE*.  E.g.  if a
block is needed in the buffer cache and none is available the process will
sleep until one is available.  That is *NOT THE SAME THING* as saying that the
buffer cache as a data structure is *PROTECTED AGAINST CORRUPTION* by
sleep/wakeup.  I believe in fact this is just not the case.  I believe in fact
there is *no* general mechanism by which critical regions of code in the
kernel protect data structures against corruption that would occur from being
arbitrarily reentered except the two I mentioned:  (1) Knowledge and care that
a race condition is in fact not possible; (2) disabling interrupts.  I would
be *DELIGHTED* to be proven wrong on this point.  It is exactly because
wakeups can be lost that they are dangerous to use for mutual exclusion.
Tanenbaum gives an example of such a race condition.

Now answer me one question.  If the kernel is so hunky dory inside, just why
does a driver writer have to know about spl()??  (To know *A LOT* about spl()
in fact.)  In my opinion a driver writer should only need to know how to mask
interrupts for the device being driven.  I believe it is quite correct that
there are many many drivers which *must use spl()* to prevent corruption of
kernel data structures because the kernel provides nothing better.  I'd be
amused to see how long the kernel you run would last if all the spl()'s in
all the drivers you use were excised in favor of sleep/wakeups.  I bet you
wouldn't be amused at all.

And now a word for those who think this is all nit-picking and doesn't matter.
It does matter, it matters a lot.  The issues we're talking about are deeply
related to why driver writing is so often described as a black art.  The fact
that it's a black art costs us all money.  It means that hardware released for
the DOS market takes months or years to make it to the UNIX market -- if it
makes it at all -- because the effort of writing a UNIX driver is so much more
tricky than writing a DOS driver.  Not to mention that porting drivers is
often far more thorny than porting user code.  The lack of a sane, well-defined
driver interface cuts down on the variety and timeliness of hardware available
in the small-system UNIX market.  This is important, dammit!!

System V.5 will have kernel threads and all that neato stuff.  Mach has
message passing.  V.4 supposedly has a formalized driver interface:
***Hooray!!!***.  Who knows what will be in OSF/ix.

All of which means el squatto to us orphan 3b1ers.  Sigh.
-- 
 Jim Rosenberg
     CIS: 71515,124                         decvax!idis! \
     WELL: jer                                   allegra! ---- pitt!amanue!jr
     BIX: jrosenberg                  uunet!cmcl2!cadre! /

mvadh@cbnews.ATT.COM (andrew.d.hay) (03/16/89)

In article <451@amanue.UUCP> jr@amanue.UUCP (Jim Rosenberg) writes:
[]
"System V.5 will have kernel threads and all that neato stuff.  Mach has
"message passing.  V.4 supposedly has a formalized driver interface:
"***Hooray!!!***.  Who knows what will be in OSF/ix.
"
"All of which means el squatto to us orphan 3b1ers.  Sigh.

Mach is (or will be soon) PD, and much of the BSD stuff it didn't
rewrite is now also PD, so why don't we create our own Mach-based
sVr5?

-- 
Andrew Hay		+------------------------------------------------------+
Null Fu-Tze		|		LEARN HOW TO AVOID RIPOFFS!	       |
AT&T-BL Ward Hill MA	|			SEND $5...		       |
mvuxq.att.com!adh	+------------------------------------------------------+

cks@ziebmef.uucp (Chris Siebenmann) (03/19/89)

In article <451@amanue.UUCP> jr@amanue.UUCP (Jim Rosenberg) writes:
| My main argument still stands.  It's my understanding that sleep/wakeup is
| used by the kernel to manage resources which are *NOT AVAILABLE*.  E.g.  if a
| block is needed in the buffer cache and none is available the process will
| sleep until one is available.  That is *NOT THE SAME THING* as saying that the
| buffer cache as a data structure is *PROTECTED AGAINST CORRUPTION* by
| sleep/wakeup.  I believe in fact this is just not the case.

 sleep()/wakeup() can be used to protect data, and some things do use
it that way (for example, locking an in-core inode -- if the inode is
already locked, you sleep() waiting for it to be unlocked so you can
lock it). Most things just use it to wait for resources to become
available.
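
 The in-core inode lock is roughly this shape -- the traditional
plock()/prele() pair, sketched with stubs rather than copied from any
particular release:

#define ILOCKED 01
#define IWANT   02
#define PINOD   10

struct inode { int i_flag; };

static void sleep(void *chan, int pri)  { (void)chan; (void)pri; }
static void wakeup(void *chan)          { (void)chan; }

void
plock(struct inode *ip)                 /* lock the inode */
{
        while (ip->i_flag & ILOCKED) {
                ip->i_flag |= IWANT;    /* note that somebody is waiting */
                sleep(ip, PINOD);       /* re-test after every wakeup */
        }
        ip->i_flag |= ILOCKED;
}

void
prele(struct inode *ip)                 /* release it */
{
        ip->i_flag &= ~ILOCKED;
        if (ip->i_flag & IWANT) {
                ip->i_flag &= ~IWANT;
                wakeup(ip);             /* wake all sleepers; first one in wins */
        }
}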

| I believe in fact
| there is *no* general mechanism by which critical regions of code in the
| kernel protect data structures against corruption that would occur from being
| arbitrarily reentered except the two I mentioned:  (1) Knowledge and care that
| a race condition is in fact not possible; (2) disabling interrupts. 

 Certainly there doesn't seem to be one in Ultrix/BSD. Any amount of
code relies on its ability to go merrily traipsing down linked lists
of buffers, for example, without any locking at all. I've even (ahem)
written some. It's a very real worry, because you suddenly have to
figure out which kernel routines can sleep() and perhaps let someone
else in to destroy that data structure you've been carefully building.
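
 In miniature the trap looks something like this (hypothetical names; the
bug pattern is what matters).  Walking the list with no lock is "safe" only
because nothing switches us out in kernel mode -- until something we call
can sleep():

struct buf {
        struct buf *b_next;
        int         b_flags;
};

static struct buf *bufhead;             /* shared, completely unlocked list */

static void sleep(void *chan, int pri)  { (void)chan; (void)pri; }

static int
examine(struct buf *bp)                 /* suppose this can wait for memory or I/O */
{
        sleep(bp, 20);                  /* somebody else may now run and      */
        return 0;                       /* unlink bp or rearrange the chain   */
}

void
scan_bufs(void)
{
        struct buf *bp;

        for (bp = bufhead; bp != 0; bp = bp->b_next)
                if (examine(bp))        /* after the call, bp->b_next may     */
                        continue;       /* point at freed or moved storage    */
}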

| Now answer me one question.  If the kernel is so hunky dory inside, just why
| does a driver writer have to know about spl()??  (To know *A LOT* about spl()
| in fact.)  In my opinion a driver writer should only need to know how to mask
| interrupts for the device being driven.

 This should be sufficient as long as the driver is only manipulating
data structures 'owned' by it (either private data structures or
things like buffers it's putting information into). It's when you get
into things like multiple ethernet interfaces at different levels that
you get into trouble -- and watch out for simple locks, lest you wind
up deadlocked in a high-interrupt condition.

| I'd be amused to see how long your kernel you run would last if all
| the spl()'s in all the drivers you use were excised in favor of
| sleep/wakeups.  I bet you wouldn't be amused at all.

 It wouldn't last long at all, in fact. Remember that sleep()/wakeup()
take place in the context of a process; interrupt routines have no
process context to do a sleep() in (actually, they 'have' a process
context -- the context of whatever random process happened to be
active when the interrupt happened). This lack of interrupt context
bites NFS in BSD systems badly; the server side of NFS is a program
that forks itself N times and then immediately dives into the kernel,
never to return. It has to be a process because the NFS routines need
to sleep() both for disk I/O and for incoming requests.
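
 The server program is little more than this, heavily simplified -- it
exists only to donate process contexts.  (The system call is nfssvc() in
the BSD releases I've seen, but treat the exact name and arguments here as
assumptions.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

extern int nfssvc(int sock);            /* assumed declaration; varies by system */

int
main(int argc, char **argv)
{
        int i, nservers = (argc > 1) ? atoi(argv[1]) : 4;

        for (i = 1; i < nservers; i++)  /* parent becomes server number 0 */
                if (fork() == 0)
                        break;

        /* dives into the kernel and serves requests there, sleeping as
           needed; in normal operation it never comes back */
        nfssvc(0 /* socket set up elsewhere, omitted */);

        perror("nfssvc returned");      /* reached only on error */
        return 1;
}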

 If people are interested in a paper on what sort of things are
needed, I'd recommend Bach's paper on adapting the kernel for
multiprocess systems in the AT&T Bell Laboratories Technical Journal,
Vol 63 No 8 (reprinted as UNIX SYSTEM READINGS AND APPLICATIONS,
Volume II).

-- 
	"Though you may disappear, you're not forgotten here
	 And I will say to you, I will do what I can do"
Chris Siebenmann		uunet!{utgpu!moore,attcan!telly}!ziebmef!cks
cks@ziebmef.UUCP	     or	.....!utgpu!{,ontmoh!,ncrcan!brambo!}cks