cory@three.MV.COM (Cory Kempf) (06/06/90)
jesup@cbmvax.commodore.com (Randell Jesup) writes:
>In article <54992@microsoft.UUCP> edwardj@microsoft.UUCP (Edward JUNG) writes:
>>The thing that pre-emptive multitasking does is give guarantees against poor
>>code.

>	Note that in this case, "poor code" can mean anything that runs for
>a significant period without explicitly giving up the processor.  This
>includes most "standard" C programs which use appreciable processor time,
>such as ray tracers.

A properly written user-oriented program would check for events frequently,
even in the middle of a heavy-duty CPU burst.  That is just good
user-oriented development, though.  Remember: the USER is in control.

>	It also means that background tasks doing IO may not work well
>or at all if the frontmost application doesn't give up the cpu VERY often,
>since various buffers may overflow, etc.

See above.  At a minimum, each application should get some CPU time each
second.  If your application is processing in bursts that are too large,
it is broken -- it needs to be fixed.

One side note: Mac System 7 gives the user the ability to interrupt and
terminate any job that is taking up too much CPU time (at least, that is
what the docs say... I have yet to see it).

+C
--
Cory Kempf			I do speak for the company (sometimes).
Three Letter Company		603 883 2474
email: cory@three.mv.com, harvard!zinn!three!cory
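Cory's polling discipline, as a minimal C sketch.  The `user_event_pending` and `handle_event` calls are hypothetical stand-ins for whatever event-polling API the host OS provides (not real Toolbox routines), and the check interval is an arbitrary choice:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the host OS's event-polling API. */
static int user_event_pending(void) { return 0; }  /* no events in this demo */
static void handle_event(void) { }

#define CHECK_INTERVAL 1000  /* poll for events every N iterations */

/* A long-running computation that cooperatively polls for user events,
   so the user stays in control even mid-burst. */
long cooperative_sum(long n)
{
    long total = 0;
    for (long i = 0; i < n; i++) {
        total += i;
        if (i % CHECK_INTERVAL == 0 && user_event_pending())
            handle_event();   /* give the user a chance to intervene */
    }
    return total;
}
```

The cost is one test and branch per `CHECK_INTERVAL` iterations; whether that is tolerable in a ray tracer's inner loop is exactly the point under dispute below.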
martin@cbmvax.commodore.com (Martin Hunt) (06/07/90)
>A properly written user oriented program would check for events frequently,
>even in the middle of a heavy duty CPU burst.  That is just good user
>oriented development though.  Remember: The USER is in control.

That may be your definition of proper software, but it sure isn't mine!
Do you really suggest that the innermost loop of a ray tracer should contain
a call to a function that checks for any higher-priority tasks that need the
CPU?  Maybe I should just include such a call every 10th line or so.

>See above.  At a minimum, each application should get some CPU time each
>second.  If your application is processing in bursts that are too large,
>it is broken -- it needs to be fixed.

It's the OS that is broken, not the application.  With a lot of effort from
the programmers and a good choice of applications, a cooperative multitasking
system can look as good as a preemptive one, but it's really just a good
example of applications doing work that needs to be done by the OS.

>Cory Kempf			I do speak for the company (sometimes).

--
Martin Hunt			martin@cbmvax.commodore.com
Commodore-Amiga Engineering	{uunet|pyramid|rutgers}!cbmvax!martin
zs01+@andrew.cmu.edu (Zalman Stern) (06/07/90)
[This may have little to do with comp.arch.  My apologies if you are offended.]

cory@three.MV.COM (Cory Kempf) writes:
> A properly written user oriented program would check for events frequently,
> even in the middle of a heavy duty CPU burst.  That is just good user
> oriented development though.  Remember: The USER is in control.
> [...]
> See above.  At a minimum, each application should get some CPU time each
> second.  If your application is processing in bursts that are too large,
> it is broken -- it needs to be fixed.

Using preemptive lightweight threads and an abort/exception signaling
mechanism, one can cleanly structure an application so that its computational
parts do not have to explicitly test for user interaction.  That is, the user
interaction can be handled in one thread and the computation in another.  A
signal-style communication mechanism (preferably with language-level support)
can be used to synchronize the two threads when necessary.  This results in
cleaner code, but explicit synchronization must be used to ensure mutual
exclusion.

Also, distributed systems (or maybe just "large complex systems") make it
much harder to maintain crisp user interaction in a non-preemptive
environment.  For example, on the system I work on, open almost always
happens very quickly but, occasionally, it can take up to two minutes
(timing out a fileserver).  How are you going to deal with that in a
non-preemptive environment?

Sincerely,
Zalman Stern | Internet: zs01+@andrew.cmu.edu | Usenet: I'm soooo confused...
Information Technology Center, Carnegie Mellon, Pittsburgh, PA 15213-3890
*** Friends don't let friends program in C++ ***
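Zalman's two-thread structure can be sketched with POSIX threads and C11 atomics (an anachronism for 1990, used here only for illustration; the shared flag stands in for the abort/signal mechanism he describes, and for determinism the "user" raises it before the computation even starts):

```c
#include <pthread.h>
#include <stdatomic.h>

/* Shared abort flag: the user-interaction thread sets it; the compute
   thread merely tests it and never polls for UI events itself. */
static atomic_int abort_requested = 0;

static void *compute(void *arg)
{
    long *count = arg;
    *count = 0;
    for (long i = 0; i < 1000000; i++) {
        if (atomic_load(&abort_requested))
            break;                 /* cleanly unwind the computation */
        (*count)++;
    }
    return NULL;
}

/* Run the computation with the abort flag already raised -- a stand-in
   for the user hitting "cancel" before the work begins. */
long run_aborted_compute(void)
{
    long count = -1;
    atomic_store(&abort_requested, 1);
    pthread_t t;
    pthread_create(&t, NULL, compute, &count);
    pthread_join(&t, NULL);
    return count;   /* 0 iterations completed: the abort was seen at once */
}
```

The compute loop never calls an event routine; preemption plus a shared flag keeps the user-interaction logic entirely out of the numeric code.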
peter@ficc.ferranti.com (Peter da Silva) (06/07/90)
In article <355@three.MV.COM> cory@three.MV.COM (Cory Kempf) writes:
> A properly written user oriented program would check for events frequently,
                                                 ^^^^^^^^^^^^^
> even in the middle of a heavy duty CPU burst.  That is just good user
> oriented development though.  Remember: The USER is in control.

On the Mac, all programs are basically editors.  This is not always the best
way to design a program.  For example, the aforementioned ray tracer.

To put it another way, what about non-user-oriented programs?  Even programs
with an extensive user interface may contain some computationally intensive
code.  Spreadsheets, for example.

Haicalc, on the Amiga, is implemented as a compute engine and a set of
user-interface tasks, each window actually being managed by a separate
process.  When you change a cell, this is communicated to the compute engine,
which takes appropriate action.  The user process continues to run.  Any
changes are broadcast to each user-interface task concerned.

The UI tasks are user-driven.  The compute task is written for speed.  You
just can't build an application like this on the Mac.
--
`-_-'  Peter da Silva.  +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`   Have you hugged your wolf today?   <peter@sugar.hackercorp.com>
@FIN   Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.
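The Haicalc-style split can be sketched as a message queue between the UI side and a compute engine.  All names here are invented for illustration, and both sides run in one thread so the example stays deterministic; on the Amiga each would be its own task sharing the queue:

```c
/* UI tasks post cell-change messages; the compute engine drains the
   queue and recomputes dependent cells, so the UI never blocks on math. */

#define QSIZE 16

typedef struct { int cell; double value; } Msg;

static Msg queue[QSIZE];
static int head = 0, tail = 0;

static void post(int cell, double value)        /* called by a UI task */
{
    queue[tail] = (Msg){ cell, value };
    tail = (tail + 1) % QSIZE;
}

static double cells[8];

static void engine_drain(void)                  /* the compute engine */
{
    while (head != tail) {
        Msg m = queue[head];
        head = (head + 1) % QSIZE;
        cells[m.cell] = m.value;
        cells[7] = cells[0] + cells[1];         /* recompute a sum cell */
    }
}
```

Because the engine only sees messages, the UI tasks stay user-driven while the compute side is free to be written purely for speed -- the separation Peter describes.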
seanf@sco.COM (Sean Fagan) (06/09/90)
In article <355@three.MV.COM> cory@three.MV.COM (Cory Kempf) writes:
>A properly written user oriented program would check for events frequently,
>even in the middle of a heavy duty CPU burst.  That is just good user
>oriented development though.  Remember: The USER is in control.

And that is why the MacOS is not a "true" OS.  Because the *USER* (actually,
the application) is in control, not the OS.

--
-----------------+
Sean Eric Fagan  | "It's a pity the universe doesn't use [a] segmented
seanf@sco.COM    |  architecture with a protected mode."
uunet!sco!seanf  |          -- Rich Cook, _Wizard's Bane_
(408) 458-1422   | Any opinions expressed are my own, not my employers'.
ac08@vaxb.acs.unt.edu (ac08@vaxb.acs.unt.edu (C. Irby)) (06/09/90)
In article <6570@scolex.sco.COM>, seanf@sco.COM (Sean Fagan) writes:
> In article <355@three.MV.COM> cory@three.MV.COM (Cory Kempf) writes:
>>A properly written user oriented program would check for events frequently,
>>even in the middle of a heavy duty CPU burst.  That is just good user
>>oriented development though.  Remember: The USER is in control.
>
> And that is why the MacOS is not a "true" OS.  Because the *USER* (actually,
> the application) is in control, not the OS.

Ohmigod!!!  Anything but THAT!!!

Once you let them user-types get in control, ANYTHING could happen!!!

;)

C Irby
gillies@m.cs.uiuc.edu (06/11/90)
> What an amazingly half-baked idea!  Oh, sorry... I just insulted the bakers
> of the world.  What you're talking about is placing an absolutely impossible
> task in front of the compiler writers (who certainly have enough to worry
> about already) in hopes of making life just a *tiny* bit easier for OS and
> application developers.  There is NO WAY that the compiler can have any but
> the foggiest idea concerning which points are "appropriate" for a possible
> context switch.  The application designer will certainly have a much better
> idea, or the OS can allow the user to choose, but the compiler is absolutely
> the worst place to attempt a solution.

This makes me chuckle, since people have been doing it for 25 years without
knowing "it's impossible" or "half-baked".  The threaded TUTOR interpreter
that runs on PLATO (CDC Cybers) inserts checks for context switches
("autobreaks") before every backwards branch.  It works quite well, too,
supporting hundreds of users with high efficiency, and load-balancing the
system so nobody claims more than 10,000 instructions per second.

As for those who complain about the slowdown of tight loops -- well, tight
loops can be unrolled, or countdown timers can be used to reduce the
frequency of checking to give up the processor.

I think in 5 or 10 years, when everyone wants their desktop workstation to
outperform a Cray 5, this strategy may come back into vogue.  Like everything
else, it costs some speed to build a CPU that can handle interrupts, and
Seymour Cray is well aware of this.

Don Gillies, Dept. of Computer Science, University of Illinois
1304 W. Springfield, Urbana, Ill 61801
ARPA: gillies@cs.uiuc.edu   UUCP: {uunet,harvard}!uiucdcs!gillies
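The countdown-timer variant Gillies mentions can be sketched as follows: instead of testing for a context switch on every backward branch, decrement a counter and perform the (relatively expensive) scheduler check only when it reaches zero.  `maybe_yield` is a hypothetical stand-in for the scheduler hook, and the quantum is an arbitrary choice:

```c
#define YIELD_QUANTUM 256   /* iterations between scheduler checks */

static long yields_taken = 0;

/* Hypothetical scheduler hook; a real system would test for pending
   higher-priority work here and context-switch if needed. */
static void maybe_yield(void) { yields_taken++; }

/* A tight loop instrumented with a countdown timer, so the per-iteration
   overhead is one decrement-and-test rather than a full scheduler check. */
long sum_with_countdown(long n)
{
    long total = 0;
    int countdown = YIELD_QUANTUM;
    for (long i = 0; i < n; i++) {
        total += i;
        if (--countdown == 0) {
            maybe_yield();
            countdown = YIELD_QUANTUM;
        }
    }
    return total;
}
```

Over 1024 iterations this reaches the scheduler hook only 4 times, which is the point of the technique: the check's cost is amortized across the quantum.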
dfields@neutrino.urbana.mcd.mot.com (David Fields) (06/15/90)
In article <4242@darkstar.ucsc.edu>, golding@saturn.ucsc.edu (Richard A. Golding) writes:
|>In article <jdarcy.644899889@zelig> jdarcy@encore.com (Mostly Useless) writes:
|>>mattly@aldur.sgi.com (James Mattly):
|>>> <James asks us to consider compiler support for adding preemption points>
|>>
|>> <jdarcy suggests that the compiler would have a difficult time finding
|>>  appropriate places for preemption without significant help from the
|>>  application designer>
|>
|>In fact some recent research has shown just the opposite: that compile-time
|>assistance is a very *good* thing for operating system design.  The Emerald
|>system (University of Washington) gets a lot of compiler assistance, and
|>gets significant speedup as a result.  More to the point of this newsgroup,
|>the SOAR (Smalltalk On A RISC, UC Berkeley) processor makes assumptions
|>about code behaviour to allow a simpler interrupt and context-switching
|>mechanism.  By only performing context switches at method invocations,
|>things got easier (it's been a couple years since I read Ungar's
|>dissertation, so the details are a bit hazy.)
|>
|>So I think it's rather hasty to say that compiler assists like this are
|>unreasonable... people are actually doing such things.
|>
|>-richard

I haven't read too much about Emerald.  Does it have support for adding
preemption to applications?  Or were you just generalizing jdarcy's
suggestion, that compiler support for application preemption would be
inappropriate, to mean that ALL compiler assistance to the OS would be evil?

Dave Fields // Motorola MCD // uiucuxc!udc!dfields // dfields@urbana.mcd.mot.com