kirkenda@jove.cs.pdx.edu (Steve Kirkendall) (08/17/89)
It's my turn to talk now. I promise to try and say something original.

1) Why do I want multitasking? Well, mostly because it would make desk accessories obsolete; each DA could be replaced by an application program. DAs use memory from the time the system is booted to the time it is rebooted. This is bad. You have to reboot the system to install a new DA. This is bad. DAs must be written and compiled differently than "regular" programs. This is bad. Presently we must worry about two kinds of programs ("*.ACC" and "*.PRG"). This is bad. In fact, the only good thing about DAs is that you can run one while you're in the middle of an application, and a REAL multitasking system would give that ability to EVERY program.

Also, there are times when it is easier or more efficient to implement a single application as a collection of tasks. An example of this from UNIX is the 'cu' program, which is implemented as two processes: one to copy characters from the keyboard to the modem, and one to copy characters from the modem to the screen. "Client/Server" systems (such as X-windows and certain database packages) provide more examples. The Minix kernel and AmigaDos are both implemented as collections of tasks, because they are easier to maintain that way.

2) One thing I wonder about, though: what would multitasking TOS look like? I mean "look" literally -- how would several GEM programs *share* the screen? Currently, it seems that every GEM program expects to have exclusive control of the screen. Each handles its own refreshes, and has its own menu bar. How does DRI's multitasking 80x86 version of GEM handle this?

3) Concerning the necessity of an MMU: Obviously, an MMU is not *necessary*, since Minix works. Just as obviously, an MMU is *desirable* because it limits the damage from a program on the rampage (pardon the pun), makes debugging easier, eliminates the need for relocation, etc. I want an MMU, but I don't have one, and I'm not going to let that keep me from using multitasking under Minix.

Even without an MMU, Minix-ST is fairly secure. It traps stack overflow and any attempt to dereference a NULL pointer. It does NOT trap pointers that overrun a buffer, but these are rare and typically occur only in the last process that I have started (and hence in the process that resides in the last portion of used memory), so no harm is done to other processes.
-- 
Steve Kirkendall    ...uunet!tektronix!psueea!jove!kirkenda   or   kirkenda@cs.pdx.edu
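To make the 'cu' example above concrete, here is a minimal POSIX-flavoured sketch of that two-process arrangement: the parent copies keyboard to modem while a forked child copies modem to screen. This is not the real cu source; the helper name cu_like() and the already-opened modem descriptor are assumptions, and terminal-mode setup and error recovery are omitted.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* copy bytes from one descriptor to another until EOF */
    static void copy(int from, int to)
    {
        char buf[512];
        ssize_t n;
        while ((n = read(from, buf, sizeof buf)) > 0)
            write(to, buf, (size_t)n);
    }

    void cu_like(int modem_fd)             /* hypothetical helper name */
    {
        pid_t pid = fork();
        if (pid == 0) {                    /* child: modem -> screen */
            copy(modem_fd, STDOUT_FILENO);
            _exit(0);
        }
        copy(STDIN_FILENO, modem_fd);      /* parent: keyboard -> modem */
        waitpid(pid, NULL, 0);             /* reap the reader when done */
    }

Each half is a trivial loop that spends most of its life blocked in read(), which is exactly why splitting the job into two tasks is simpler than one program juggling both directions.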
obryan@gumby.cc.wmich.edu (Mark O'Bryan) (08/19/89)
In article <1610@psueea.UUCP>, kirkenda@jove.cs.pdx.edu (Steve Kirkendall) writes:
> 1) Why do I want multitasking? Well, mostly because it would make desk
> accessories obsolete; each DA could be replaced by an application program.
> DAs use memory from the time the system is booted to the time it is rebooted.
> This is bad. You have to reboot the system to install a new DA. This is bad.

Actually, neither of these statements is true if you have MultiDesk from CodeHead Software. It allows you to load and flush accessories pretty much at will, without rebooting. It's a slick product.

Your other points about DA limitations (omitted) were well made, and easy to agree with.
-- 
Mark T. O'Bryan                 Internet:  obryan@gumby.cc.wmich.edu
Western Michigan University
Kalamazoo, MI  49008
selick@bucsf.bu.edu (Steven Selick) (02/11/91)
In response to all of the talk about amigas being 'true' multitasking machines and such, I just thought I'd point out that no matter how you slice it, a single microprocessor can only handle one instruction at a time. Depending on the methods you choose to do your application switching, you can buffer i/o and such to maximize your productivity, but you are still performing one function on the processor at any given clock tick.

Although I am relatively upset at the low priority given to midi and such musical applications on the net, I must agree with the amiga user that one of the main features of the st is its built-in midi port and relatively low cost, making it ideal for the struggling musician (like myself). But I must say that I have USED the 'big' sequencers on the mac, and I have toyed with amiga/ibm sequencers in the store, and none has come close to Notator on the ST. The ST is a wonderful machine for the price, and I was happy enough with my 1040st to upgrade to an STE (mainly for the blitter chip, tos 1.6, and memory upgrade, but what else is there?).

I also just want to say that on the mac (I am running Spectre, by the way) I feel considerably more closed in. This is not a concrete term, but on the old apple ][ I felt at home. On the ST, I feel at home. On the mac, I feel very restricted. I am not saying that the st is better... I don't think anybody needs to make that distinction. I prefer it.

As for multitasking, I hope atari does a system somewhat like multifinder on the Mac, because I love it.

see ya!
steve <selick@bucsf.bu.edu>
lsmichae@immd4.informatik.uni-erlangen.de (Lars Michael ) (02/11/91)
selick@bucsf.bu.edu (Steven Selick) writes:
>In response to all of the talk about amigas being 'true' multitasking
>machines and such, I just thought I'd point out that no matter how you
>slice it, a single microprocessor can only handle one command at a time.
>Depending on the methods you choose to do your application switching,
>you can buffer i/o and such to maximize your productivity, but you still
>are performing one function on the processor at any given clock tick.

I really agree with you, *but* a multitasking system slices up each second so that all non-blocked processes get one or more slices of time. This is transparent to the user, so he may think he has *all* his processes running.

Actually, multitasking can help you, e.g. in programming. Let's compare:

Monotasking:
    You write some source with the editor, leave the editor, compile,
    debug, find an error and start the editor again. Each application
    has to be left before the next can run.

Multitasking:
    Load all applications before use, and switch (without loading)
    only to the one you want to use. Indeed this is faster!

>Although I am relatively upset at the low priority given to midi and
>such musical applications on the net, I must agree with the amiga user
>that one of the main features of the st is its built in midi port and
>relative low cost, making it ideal for the struggling musician (like
>myself) but I must say that I have USED the 'big' sequencers on the
>mac, and I have toyed with amiga/ibm sequencers in the store, and none
>has come close to Notator on the ST. The ST is a wonderful machine for
>the price, and I was happy enough with my 1040st to upgrade to an STE
>(mainly for the blitter chip, tos 1.6, and memory upgrade, but what else
>is there?)

What else? The STE has:
    hardware scrolling, horizontal and vertical
    four joystick ports (plus two internal ports?)
    stereo digital sound
    and some features which I don't know.

Lars

PS: I'm an enthusiastic ST user.

Lars Michael                            "Down with ATARI,
lsmichae@faui43.uni-erlangen.de          Long live the ST!"
"May the Schwartz be with you!"
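The "slices of time" idea above can be shown with a toy dispatcher. Real kernels do this from a timer interrupt and save and restore full register state; here the "tasks" are just functions called in turn, and the blocked spooler gets skipped, which is enough to show the shape of round-robin slicing. Task names are invented for illustration.

    #include <stdio.h>

    #define NTASKS 3

    struct task {
        const char *name;
        int blocked;              /* e.g. waiting for I/O */
        void (*step)(void);       /* run one slice of work */
    };

    static void edit(void)    { printf("editor: handle keystrokes\n"); }
    static void compile(void) { printf("compiler: chew on some code\n"); }
    static void print(void)   { printf("spooler: feed the printer\n"); }

    int main(void)
    {
        struct task tasks[NTASKS] = {
            { "editor",   0, edit },
            { "compiler", 0, compile },
            { "spooler",  1, print },    /* blocked: gets no slices */
        };
        int tick, cur = 0;

        for (tick = 0; tick < 10; tick++) {     /* ten "clock ticks" */
            int tries = 0;
            while (tasks[cur].blocked && tries++ < NTASKS)
                cur = (cur + 1) % NTASKS;       /* skip blocked tasks */
            if (!tasks[cur].blocked)
                tasks[cur].step();              /* give it one slice */
            cur = (cur + 1) % NTASKS;
        }
        return 0;
    }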
plinio@boole.seas.ucla.edu (Plinio Barbeito/) (02/14/91)
In article <1991Feb11.151210.4010@informatik.uni-erlangen.de> lsmichae@immd4.informatik.uni-erlangen.de (Lars Michael) writes:
>selick@bucsf.bu.edu (Steven Selick) writes:
>
>>Depending on the methods you choose to do your application switching,
>>you can buffer i/o and such to maximize your productivity, but you still
>>are performing one function on the processor at any given clock tick.
>
>I really agree with you, *but* a multitasking system slices one second
>so all non-blocked processes get one or more slices of time. This is
>transparent for the user, so he may think he has *all* his processes
>running.

Provided the time for a task switch is small enough. If task switches were done every 10 seconds (and there's no law saying you can't), the user would notice. Make task switches every microsecond and again the user notices something: a darn slow machine (due to overhead -- I'll explain below).

>Actually multitasking can help you e.g. in programming. Lets compare:

OK, let's compare. I'll try to balance the argument with the disadvantages.

>Monotasking:
>    You write some source with the editor, leave the editor, compile,
>    debug, find an error and start the editor again. Each application
>    has to be left before the next can run.

Plus, each application runs at the full speed of the processor, without being interrupted every now and then so that the scheduler can start up to "direct traffic". Thus, in its simplicity, a single-tasking system avoids the typical 10-40% overhead on the processor due to the kernel having to do its housekeeping (e.g. basically "asking" each process "Do you want to run now?"). An 8MHz single-tasking machine running a given program would appear to be a 7.2 MHz machine if it had multitasking overhead of only 10%.

Since a lot of the time a single-user machine like the ST only needs to be running one program, there'd better be a good reason to penalize the user with a significant amount of overhead all the time.

Once two processes start running at the same time, each process will seem slower because they are sharing the processor time with each other and with the scheduler; each process will then be running at 50% speed (minus the overhead). If one of them stops to wait for the disk, say, then the other usually gains back the entire processor minus the overhead (minus any slowdown due to the disk transfer and the processor trying to use memory at the same time).

Run too many other processes, and the machine may become unusably slow. The user might be frustrated to press a key and not get the expected response (yet). He might start to think that the machine is hung. To get around this, some schedulers pre-empt the processor so that the user is still able to type, use the mouse... at nearly the speed he is accustomed to (at the expense of other processes). But this adds even more code to the scheduler, which you are trying to keep small because of overhead.

In addition, if there is neither multitasking nor task switching, each application has the entire memory space to itself, so it will not crash due to some other process having tweaked its memory space (sometimes by accident, sometimes by ignorance... sometimes on purpose ];-D ).

>Multitasking:
>    Load all applications before use, and switch (without loading)
>    only to the one you wanna use. Indeed this is faster !

What you've described is only task-switching. When you type control-Z from Gulam's built-in 'ue' editor to go back to the shell, or type fg to go back in, you are task-switching.
You are also task switching if you are in Gulam's terminal emulator and press the Undo key to go back to the shell screen. Indeed this is fast. But multitasking is having many processes loaded AND (potentially) running at the same time. It means that two processes that don't know what each other does can possibly be using the other's memory without its permission.

I don't mean to imply that because a machine is multitasking it will always be crashing. In fact, I prefer being able to start it up, if only because I like to keep editing while I'm compiling a large program. But on a machine with no MMU, it is necessarily risky. Say you had ten processes running at the same time. The risk is ten times higher that some of the code in RAM will have a bug in it or will misbehave.

You have to be doubly careful if you are doing serious work and at the same time you are starting up a new program that you haven't used before. So you save your files and wait for the save to finish before you try the experimental stuff. To be even more sure that the new program hasn't smashed a part of the data in the other programs, you quit the other programs. You don't want part of the program you're writing to have a bunch of funny characters in it all of a sudden. And to be safe from having the operating system's tables (which it has to have in RAM to multitask) tweaked by the program, causing weird behaviour later, you quit the operating system. Yes, you can do this with MiNT (and possibly Beckemeyer's MT C-Shell) on the ST, by exiting the top-level shell.

>>the price, and I was happy enough with my 1040st to upgrade to an STE
>>(mainly for the blitter chip, tos 1.6, and memory upgrade, but what else
>>is there?)
>    hardware scrolling horizontal and vertical
>    four Joystick ports (plus two internal ports ?)
>    stereo digital sound
>    and some features which I don't know.

There is also a 4096-color palette, and I think it has RCA jacks (ports?) for the stereo audio, and the ability to add more memory with SIMMs.

plini b
-- 
----- ---- --- -- ------ ---- --- -- - - -   plinio@seas.ucla.edu
Putting the Ctrl key under Shift, replacing it with CAPSLOCK, is like
putting the steering wheel in the trunk and trying to drive with the
spare tire.
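The overhead arithmetic from the post above, worked out as a tiny program: an 8 MHz CPU minus a 10% scheduler tax, shared among N busy processes. Purely illustrative numbers, not a measurement of any real system.

    #include <stdio.h>

    int main(void)
    {
        double clock_mhz = 8.0;
        double overhead  = 0.10;              /* 10% lost to the kernel */
        int n;

        for (n = 1; n <= 4; n++) {
            double per_process = clock_mhz * (1.0 - overhead) / n;
            printf("%d busy process(es): each sees ~%.1f MHz worth of CPU\n",
                   n, per_process);
        }
        return 0;
    }
    /* n=1 gives 7.2 MHz, n=2 gives 3.6 MHz each, and so on. */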
david@doe.utoronto.ca (David Megginson) (02/14/91)
In article <1976@lee.SEAS.UCLA.EDU> plinio@boole.seas.ucla.edu (Plinio Barbeito/) writes:
>Provided the time for a task switch is small enough. If task switches
>were done every 10 seconds (and there's no law saying you can't), the
>user would notice. Make task switches every microsecond and again
[...]
>being interrupted every now and then so that the scheduler can start
>up to "direct traffic". Thus, in its simplicity, a single-tasking
>system avoids the typical 10-40% overhead on the processor due to the
>kernel having to do its housekeeping (e.g. basically "asking" each
>process "Do you want to run now?"). An 8MHz single-tasking machine
>running a given program would appear to be a 7.2 MHz machine if it had
>multitasking overhead of only 10%.
>
>Since a lot of the time a single-user machine like the ST only needs
>to be running one program, there'd better be a good reason to penalize
>the user with a significant amount of overhead all the time.

It's not as bad as you make it out to be. First of all, all of the interrupts (mouse handler, etc.) in the ST can slow down any program as much as or more than a multi-tasking scheduler.

The ideal way to multi-task is to have the foreground program at full priority, and the background program(s) at low priority. i.e., if you are writing something in an editor, the editor does not waste CPU time while it is waiting for a keystroke, and the compilation process in the background can use the full CPU. As soon as you hit a key, the editor takes over most or all of the CPU, and the compilation process bides its time until the editor is waiting for another keystroke.

I'm a fast typist, and I don't find microemacs sluggish, even when virmf is making a new TeX font in the background! If you have a slow hard disk, however, any disk-bound process can slow down all the tasks. However, when I am running only a single task under MT C-Shell or MiNT, I cannot notice any speed difference between that and TOS.

David
-- 
David Megginson                     david@doe.utoronto.ca
Centre for Medieval Studies         meggin@vm.epas.utoronto.ca
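Why the idle editor above costs the background compile essentially nothing: it blocks in the kernel waiting for a key rather than polling for one. A POSIX-style sketch of the two approaches follows; it is not taken from any particular ST editor, and the function names are made up.

    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/select.h>

    /* Good: block in the kernel; the background compile gets the CPU
       until a key actually arrives. */
    int get_key_blocking(void)
    {
        char c;
        return (read(STDIN_FILENO, &c, 1) == 1) ? (unsigned char)c : -1;
    }

    /* Bad: busy-poll for a key; every empty poll is a time slice the
       compiler could have used. */
    int get_key_polling(void)
    {
        char c;
        for (;;) {
            fd_set fds;
            struct timeval tv = { 0, 0 };     /* don't wait at all */
            FD_ZERO(&fds);
            FD_SET(STDIN_FILENO, &fds);
            if (select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) > 0)
                return (read(STDIN_FILENO, &c, 1) == 1) ? (unsigned char)c : -1;
            /* nothing yet: loop and ask again, burning CPU */
        }
    }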
CXW148@psuvm.psu.edu (02/15/91)
Could someone please email me a description of hardware multitasking (boards, etc.) that allows true multitasking on the Atari ST? I've heard about the software, but multitasking under software is not multitasking. A multiprocessor board is necessary for true multitasking.

Chris Winward
userid CXW148 on psuvm.psu.edu

Disclaimer: This note does not exist, therefore you are not reading it now.
plinio@turing.seas.ucla.edu (Plinio Barbeito/) (02/15/91)
In article <1991Feb14.133758.3687@doe.utoronto.ca> david@doe.utoronto.ca (David Megginson) writes:
[...]
>>system avoids the typical 10-40% overhead on the processor due to the
>>kernel having to do its housekeeping (e.g. basically "asking" each
>>process "Do you want to run now?"). An 8MHz single-tasking machine
>>running a given program would appear to be a 7.2 MHz machine if it had
>>multitasking overhead of only 10%.
>>
>>Since a lot of the time a single-user machine like the ST only needs
>>to be running one program, there'd better be a good reason to penalize
>>the user with a significant amount of overhead all the time.
>
>It's not as bad as you make it out to be. First of all, all of the

True, as long as a machine is still usable for what you need it for, and doesn't frustrate you because you know it could be done faster, who cares how much CPU time is being used up?

I was trying to balance the argument with the little-talked-about downsides of multitasking -- maybe I overdid it. There are lots of cases in which multitasking will save you time. I decided to point out some in which it doesn't, and found quite a few in the process. Again, I actually prefer to use multitasking, if the slowdown (and other things) aren't so bad as to outweigh the benefits.

Mainly, I wanted to debunk any notion that may have existed that multitasking gets you something for nothing; that you could have 10 busy, scrolling screens running side by side in different windows, and that they would all be chugging along at the full speed that you're used to on your single-tasking machine.

>interrupts (mouse handler, etc.) in the ST can slow down any program
>as much as or more than a multi-tasking scheduler. The ideal way to

The way the mouse was done on the ST happens to be one of the things that I liked most about the original OS. Machines 10 times more expensive exist that cannot approach the quality of it (the clicking is another story). Yes, even the fast, 12.5 MIPS Sparcs running Unix/X-windows have a 'mushy' feel to the mouse tracking, and a skipping mouse cursor with a painted-on appearance, by comparison.

>multi-task is to have the foreground program at full priority, and the
>background program(s) at low priority. i.e., if you are writing something
>in an editor, the editor does not waste CPU time while it is waiting
>for a keystroke, and the compilation process in the background can use
>all of the CPU, and the compilation process bides its time until the

Not *all* of the CPU. Even though you may not notice it, part of the CPU time has to be going to the taxman, simply because there's a timer interrupting the processor constantly, waking up the scheduler many times a second to check if it is time for another process to run. And this is the *efficient* way to do it.

>don't find microemacs sluggish, even when virmf is making a new TeX
>font in the background! If you have a slow hard disk, however, any

That's how it should be. But a lot of implementations out there are processor (and memory) hogs, and do make you wait for a keystroke.

As far as MiNT is concerned, know that you have been spoiled. This product has achieved Unix source near-compatibility, while at the same time imposing what is in my opinion a respectably very low CPU overhead, and a low memory overhead (by comparison) at the same time. Other implementations out there for other machines don't shine so brightly. (I won't name names in the interest of not throwing spoilt apples at other companies, but...)
Have you ever used a presumably fast machine that used cooperative multitasking? The 40% overhead figure that I mentioned above would not seem unrealistic to you after using one. So as you can see, it could be worse. It could be a LOT worse.

>disk-bound process can slow down all the tasks. However, when I am

How's that? With DMA, you'd think a slow hard disk and a fast one would burden you about the same (not much).

>running only a single task under MT C-Shell or MiNT, I cannot notice
>any speed difference between that and TOS.

Could it be possible that MiNT implements a rewrite and speedup of the Bconxxx O/S calls, and that's why typing and printing text on the screen does not seem that slow? Is it just me, or is scrolling actually FASTER under MiNT?

plini b
-- 
----- ---- --- -- ------ ---- --- -- - - -   plinio@seas.ucla.edu
Putting the Ctrl key under Shift, replacing it with CAPSLOCK, is like
putting the steering wheel in the trunk and trying to drive with the
spare tire.
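A crude way to put a number on the overhead being argued about: time a fixed amount of pure computation alone under plain TOS, then again under a multitasking shell with nothing else running, and compare. The sketch below uses only ANSI C; adjust the loop count so a run takes tens of seconds on your machine, and treat the result as a rough indication, not a benchmark.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        volatile unsigned long sum = 0;   /* volatile so the loop isn't optimized away */
        unsigned long i;
        time_t t0, t1;

        t0 = time(NULL);
        for (i = 0; i < 50000000UL; i++)  /* fixed amount of busy-work */
            sum += i;
        t1 = time(NULL);

        printf("busy loop took about %ld seconds (checksum %lu)\n",
               (long)(t1 - t0), (unsigned long)sum);
        return 0;
    }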
mjs@hpfcso.FC.HP.COM (Marc Sabatella) (02/16/91)
> software, but multitasking under software is not multitasking. A
> multiprocessor board is necessary for true multitasking.

Not by any reasonable definition of multitasking I've ever seen. Amigas multitask, Unix multitasks, and virtually every other "real" operating system out there (depending on how you label MS-DOS) multitasks without special hardware. Well, maybe an MMU, but no multiprocessors are required.

Read an OS text to learn the difference between "multitasking" and "multiprocessors". My brief summary: "multitasking" is achieving the illusion of doing more than one thing at a time, to give you a convenient user interface. But the total CPU time required to execute two processes is the same whether they run sequentially or in parallel. A "multiprocessor" is something that will execute more than one instruction at once, where those instructions may all come from the same process -- i.e., one need not multitask to take advantage of a multiprocessor. The sole purpose of a multiprocessor is to make things run faster; it is entirely orthogonal to the issue of multitasking.
vsnyder@jato.jpl.nasa.gov (Van Snyder) (02/16/91)
From 1968 to 1984, I used a Univac 1108, which had 262144 words of 36 bits each of memory (about 1 MB). The operating system used about 32k of this for code, and another 32k for data (heap), leaving about 196k words for users. We ran with 50 interactive users, 10 active batch (background) jobs, and queues of up to 50 or so background jobs, plus input and output spooling. The machine had eleven 4 ms drums, each with 262144 words, for swapping. It was expensive, but gave better response than a 6 MHz AT (although the user interface sucked).

The key to the performance was that Univac knew how to build a machine that could handle interrupts quickly (2 cycles, because there was an entire duplicate set of registers for the OS), and, more importantly, the operating system KNEW HOW TO USE INTERRUPTS. If you implement multitasking by ticking the clock every k (say 10) milliseconds, and going around asking "are you ready to run", your performance will be more like Unix (30% penalty at least).

The switching algorithm used in 1100 OS was what OS theorists call "inverse of remainder of quantum": Each process has a priority and a "time quantum". A process is NOT interrupted by the clock until its time quantum has expired. Other interrupts, e.g. I/O completion or an external event (keystroke or mouse movement), may make a higher-priority process eligible for execution. If a process willingly relinquishes control, e.g. to wait for disk or console I/O, before using 1/2 of a quantum, its quantum is halved and its priority increased. If a process willingly relinquishes control after using more than half but less than all of its quantum, neither quantum nor priority change. If a process loses control by its quantum expiring, the quantum is doubled and the priority decreased. By this scheme, the OS overhead for a compute-only task rapidly approaches zero.

If your processor and OS both know how to handle interrupts, and the task dispatcher uses decent data structures for keeping track of task priorities, and you don't piss away 1/3 of the cycles fooling with the clock unnecessarily, the overhead of a multitasking OS supporting one interactive process should also approach zero.

Another thing the 1100 OS knew how to do EXCEEDINGLY WELL was disk and tape I/O. There were five basic I/O interfaces: overlapped I/O, synchronized later by an explicit WAIT request; I/O with the OS causing you to wait to get control until the I/O completed (but somebody else got to use the processor); overlapped I/O with pseudo-interrupt at termination; I/O with wait and pseudo-interrupt at completion; and I/O with task termination and pseudo-interrupt at the end. The last has pretty much the same effect as the second, but with less overhead.

One might argue that I/O with overlap causes the machine to slow down because of contention between the channels and processors, but if the BIOS sits there and says "hey, are you done? hey, are you done? hey, are you done? ..." a million or so times per I/O, the effect is to have much worse performance than if even half of the processor's cycles get stolen by the channel.

On the ST, unfortunately, there's only one DMA channel for both the hard and floppy disks, so the most interesting overlapped-I/O opportunities can't be realized. But I/O to the disk and printer, for example, could proceed concurrently with computing IF ONLY THE OS WERE CLEVER ENOUGH. Even if TOS/GEMDOS/BIOS/XBIOS never learns how to multi-task, I'd like to have overlapped I/O to the floppy disks, ACSI, printer, RS-232 and MIDI at least.
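The "inverse of remainder of quantum" rule described above, reduced to the bookkeeping done when a process gives up (or loses) the CPU. The field names and the quantum limits are invented for the sketch; the real Exec 8 structures obviously look nothing like this.

    struct proc {
        int  priority;           /* larger = more urgent (illustrative) */
        long quantum;            /* current time quantum, in ticks */
        long used;               /* ticks used this time around */
    };

    #define MIN_QUANTUM 1
    #define MAX_QUANTUM 1024

    /* Called when the process stops running: either it blocked
       voluntarily (voluntary != 0) or its quantum ran out. */
    void adjust_quantum(struct proc *p, int voluntary)
    {
        if (voluntary && p->used * 2 < p->quantum) {
            /* gave up early: treat it as interactive */
            if (p->quantum / 2 >= MIN_QUANTUM)
                p->quantum /= 2;
            p->priority++;
        } else if (!voluntary) {
            /* ran its whole quantum: treat it as compute-bound */
            if (p->quantum * 2 <= MAX_QUANTUM)
                p->quantum *= 2;
            p->priority--;
        }
        /* voluntary, but used more than half: leave both alone */
        p->used = 0;
    }

The effect matches the claim in the post: a compute-only task keeps doubling its quantum, so it takes fewer and fewer clock interrupts and the scheduling overhead it causes shrinks toward zero.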
At the BIOS level, I'd like to give it a SCSI block (yes, SCSI, not ACSI), and a memory address if necessary. This would make it trivial to write support for tape, ethernet, toasters, etc. on the ACSI channel. I'd be happy to have only "start I/O and return immediately with I/O proceeding concurrently" and "wait for I/O to complete" interfaces to the BIOS. At the GEMDOS (file I/O) level, I'd like to add both of the above to the present "start I/O and don't continue until it's done" interface. Of course, I'd like the same for the rest of the devices. It would also be nice to be able to use interrupts, but that may require something more like a multitasking kernel.

Enough of a tirade for now. Maybe more later. In general, I don't think TOS/GEMDOS/BIOS/XBIOS precludes intelligent I/O, or even multitasking. One doesn't need all the clanking machinery of Unix.

[Aside: The hackers who invented Unix were rebelling at the size of Multics. Unix is now about 5 times the size of Multics, and doesn't do more than a few extra things (mostly networking stuff).]

Van
-- 
vsnyder@jato.Jpl.Nasa.Gov
ames!elroy!jato!vsnyder
vsnyder@jato.uucp
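What the two-call asynchronous interface asked for above might look like as a C declaration. None of this exists in TOS; the names StartIO/WaitIO, the handle type, the scsi_block layout, and DEV_ACSI0 are all hypothetical, shown only to make the proposed contract concrete.

    typedef long IoHandle;                 /* token for one pending transfer */

    struct scsi_block {                    /* hypothetical: raw command + buffer */
        unsigned char  cmd[12];            /* SCSI command descriptor block */
        void          *buffer;             /* data address, if any */
        long           length;             /* transfer length in bytes */
    };

    /* Start the transfer and return at once; the DMA/interrupt machinery
       carries on while the caller keeps computing.  Negative = error. */
    IoHandle StartIO(int device, struct scsi_block *req);

    /* Block until the transfer identified by h has finished, and return
       its completion status.  Calling WaitIO right after StartIO gives
       back today's "start and don't come back until it's done" behaviour,
       so the old interface falls out as a special case. */
    long WaitIO(IoHandle h);

    /* Typical use: overlap a disk read with computation.
           IoHandle h = StartIO(DEV_ACSI0, &req);
           do_some_computing();
           if (WaitIO(h) < 0)
               handle_error();                                          */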
vsnyder@jato.jpl.nasa.gov (Van Snyder) (02/16/91)
In article <1986@lee.SEAS.UCLA.EDU> plinio@turing.seas.ucla.edu (Plinio Barbeito/) writes:
>Not *all* of the CPU. Even though you may not notice it, part of
>the CPU time has to be going to the taxman, simply because there's a
>timer interrupting the processor constantly, waking up the scheduler
>many times a second to check if it is time for another process to run.
>And this is the *efficient* way to do it.

It's most definitely NOT the efficient way to do it, if you have an adequate clock. The *efficient* way to do it is to give the process a time quantum, set the clock to interrupt WHEN THAT QUANTUM HAS EXPIRED, and DON'T USE CLOCK INTERRUPTS FOR ANYTHING ELSE. What purpose is served by getting clock interrupts every 10 ms, only to discover that what you were doing is what you wish to continue doing?

'Way back in '72, Madnick and Donovan described this notion in their text "Operating Systems". Peterson and Silberschatz did a good job too. I've not taught an operating systems class in over 6 years, but I've been told that Tanenbaum's book is excellent. Better yet, get hold of Univac 1100 OS listings -- Univac hasn't gone OCO yet, so most any Unisys 1100 site has them.

Sorry to keep harping on this. I just can't understand why nowadays one requires 196k to do 10% of what could be done in 90k 25 years ago.

Van.
-- 
vsnyder@jato.Jpl.Nasa.Gov
ames!elroy!jato!vsnyder
vsnyder@jato.uucp
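The difference being argued here, in pseudo-kernel C. All names are invented; set_timer_oneshot() stands for whatever programs the hardware timer (on the ST that would be one of the MFP timers), and reschedule()/run() stand in for the rest of the kernel. The point is only the shape of the two approaches, not any particular implementation.

    struct proc { long quantum; long used; };   /* just enough for the sketch */

    void reschedule(void);                  /* pick the next process to run */
    void run(struct proc *p);               /* give p the CPU */
    void set_timer_oneshot(long ms);        /* hypothetical: arm the timer once */

    #define TICK_MS 10

    /* Tick-driven: wake up every 10 ms just to ask whether the quantum is
       gone -- most of the time the answer is "no, keep going". */
    void clock_tick_handler(struct proc *current)
    {
        current->used += TICK_MS;
        if (current->used >= current->quantum)
            reschedule();
    }

    /* Quantum-driven, as described above: arm the timer once for exactly
       the quantum and take no clock interrupts at all unless it actually
       expires; a process that blocks first never pays for one. */
    void dispatch(struct proc *next)
    {
        next->used = 0;
        set_timer_oneshot(next->quantum);
        run(next);
    }

    void quantum_expired_handler(void)
    {
        reschedule();                       /* fires once, only when needed */
    }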
miskinis@aisg.enet.dec.com (John Miskinis) (02/17/91)
Yes, the last reply regarding multitasking is quite correct...

The standard (GEM) Atari desktop has a limited multitasking shell built into it. This can be demonstrated easily by invoking a desk accessory while windows are being re-painted (like when exiting an application). The desk accessory will become the current "process", and the window that was in the middle of being repainted will freeze. As soon as you do something in the desk accessory (click a button, etc.) and the desk accessory continues running, the window(s) will finish repainting, and then control returns to the desk accessory...

Anyone who's a GURU, and is familiar with Atari ST MIDI and/or MFPINT, please check out my MIDI postings in comp.sys.atari.st.tech... I'm trying to solve a problem with MIDI input by replacing the level 6 interrupt handler. I'm having a problem getting a pointer to the original one, so I can restore it upon application termination... (HELP!)

_John_
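One way around the "can't get the old handler" problem: the XBIOS Setexc() call installs a new vector AND returns the previous one, so it can be stashed and put back at exit. A sketch follows; the vector number shown is my recollection of the MFP GPI4 (keyboard/MIDI ACIA) vector and should be verified against your documentation, and my_midi_handler() is your own replacement routine, normally written in assembly so it can end with RTE.

    #include <osbind.h>

    #define ACIA_VECNUM 70        /* vector 70 = address 0x118; check this! */

    typedef void (*handler_t)(void);

    static handler_t old_handler;

    extern void my_midi_handler(void);   /* your replacement handler */

    void install(void)
    {
        /* Setexc() returns the old vector while installing the new one.
           It goes through the XBIOS trap, so no supervisor-mode games
           are needed in your own code. */
        old_handler = (handler_t)Setexc(ACIA_VECNUM, my_midi_handler);
    }

    void restore(void)                   /* call before Pterm() */
    {
        if (old_handler)
            Setexc(ACIA_VECNUM, old_handler);   /* put the original back */
    }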
bill@mwca.UUCP (Bill Sheppard) (02/19/91)
[discussion of various multitasking systems deleted]

>Other implementations out there for other machines don't shine so
>brightly. (I won't name names in the interest of not throwing spoilt
>apples at other companies, but...) Have you ever used a presumably
>fast machine that used cooperative multitasking? The 40% overhead figure
>that I mentioned above would not seem unrealistic to you after using one.

A consultant doing a study for a customer of ours did some research into the amount of overhead our operating system (OS-9) required; his findings were that on a 20 MHz 68020 the OS required less than 0.5% of the CPU's processing bandwidth, using a 10 ms clock tick and 2 ticks/slice (which equals 50 time slices/second). This overhead consisted primarily of the scheduler aging processes and saving context/switching processes where appropriate. This was not on an ST, so OS tasks such as mouse handling weren't needed.

A 68030 should require substantially less overhead, since as the processor power goes up, the relative amount of time needed to do housekeeping goes down. Also, the on-board cache of a 68030 should make a substantial difference. Of course, the same holds true in the other direction -- a 68000 would require substantially more overhead as a percentage of total processing bandwidth.

Also, OS-9 is a real-time operating system, and so is tuned to require as little overhead as possible. More general-purpose OS's such as TOS (were it multi-tasking), Amiga OS, Mac System 7.0, and Unix generally would require significantly more overhead (a factor of 10 wouldn't be an unreasonable assumption, at least for Unix).
-- 
Bill Sheppard -- bills@microware.com -- {uunet,sun}!mcrware!mwca!bill
Microware Systems Corporation --- OS-9: Seven generations beyond OS/2!!
Opinions expressed are my own, though you'd be wise to adopt them!
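A quick sanity check of the figure quoted above: a 10 ms tick at 2 ticks/slice means at most 50 switches a second, so a 0.5% overhead budget allows roughly 100 microseconds (about 2000 cycles on a 20 MHz 68020) per switch for the scheduler and context save. Illustrative back-of-the-envelope arithmetic only, not a claim about OS-9 internals.

    #include <stdio.h>

    int main(void)
    {
        double tick_ms          = 10.0;
        double ticks_per_slice  = 2.0;
        double switches_per_sec = 1000.0 / (tick_ms * ticks_per_slice);
        double overhead_budget  = 0.005;                 /* 0.5% of the CPU */
        double us_per_switch    = overhead_budget * 1e6 / switches_per_sec;
        double cycles_per_switch = us_per_switch * 20.0; /* 20 MHz = 20 cycles/us */

        printf("%.0f slices/s, %.0f us (~%.0f cycles) allowed per switch\n",
               switches_per_sec, us_per_switch, cycles_per_switch);
        return 0;
    }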
7103_2622@uwovax.uwo.ca (Eric Smith) (02/19/91)
In article <1986@lee.SEAS.UCLA.EDU>, plinio@turing.seas.ucla.edu (Plinio Barbeito/) writes:
> As far as MiNT is concerned, know that you have been spoiled.
> This product has achieved Unix source near-compatibility, while at the
> same time imposing what is in my opinion a respectably very low CPU
> overhead, and a low memory overhead (by comparison) at the same
> time.

Thanks for the kind words. Low overhead was definitely one of my goals for MiNT, and I think it turned out not too bad (I get about 1 or 2% fewer dhrystones under MiNT than under TOS, which isn't noticeable to the user).

> Could it be possible that MiNT implements a rewrite and speedup of
> the Bconxxx O/S calls, and that's why typing and printing text
> on the screen does not seem that slow? Is it just me, or is scrolling
> actually FASTER under MiNT?

No, not yet. In fact, the overhead for system calls under MiNT is quite a bit higher than under TOS, so Bconxxx calls are a lot slower. However, it's possible that Cconws() and Fwrite() may be faster, so it probably balances out.
-- 
Eric R. Smith                       email:
Dept. of Mathematics                eric.smith@uwo.ca
University of Western Ontario       7103_2622@uwovax.bitnet
torrance@elaine24.stanford.edu (Mark Torrance) (02/21/91)
I have noticed problems with crashing when running multiple programs under MiNT. I wondered whether anyone has compiled a good list of which programs work and which don't under MiNT.

Mark Torrance
torrance@cs.stanford.edu
Stanford Computer Science Department
-- 
Mark C. Torrance    torrance@next.stanford.edu    415-327-2159
Undergraduate, Symbolic Systems, Stanford University
AI, Connectionist Music Composition, Atari ST
david@doe.utoronto.ca (David Megginson) (02/22/91)
In <torrance.667114685@elaine24.stanford.edu>, Mark Torrance writes:
> I have noticed problems with crashing when running multiple programs
> under MiNT. I wondered whether anyone has compiled a good list of
> which programs work and which don't under MiNT.

As far as I know, the main reason that programs crash under MiNT is that they are buggy. If a program is running alone, there is often a lot of extra memory in the computer, so if a program writes to the wrong memory accidentally, it does no damage and the bug goes undetected. When you are running a lot of programs together, there is not usually as much empty memory, so a buggy program is more likely to do something REALLY nasty, like overwriting someone else's .text segment (sure death).

This is a problem with the M68000 chip (which doesn't have memory protection) and with the programs involved, but in no way reflects badly on MiNT. You might find the same problem if you run a lot of auto-folder programs and desk accessories, then run a program in the remaining memory.

David
-- 
David Megginson                     david@doe.utoronto.ca
Centre for Medieval Studies         meggin@vm.epas.utoronto.ca
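The kind of bug described above, in its most common form: a one-past-the-end write. Run alone, the stray byte usually lands in slack space nobody owns and the bug is invisible; run with memory packed full of other processes, and with no MMU to trap the access, that byte may land in someone else's code or in a system table. The example is generic, not taken from any particular ST program.

    #include <string.h>

    void remember_name(char *dest /* 16 bytes */, const char *src)
    {
        /* Looks innocent, but if src is exactly 16 characters long,
           strcpy() writes a 17th byte (the terminating '\0') past the
           end of the buffer. */
        strcpy(dest, src);
    }

    int main(void)
    {
        char name[16];
        remember_name(name, "exactly16chars!!");  /* 16 chars + '\0' = 17 bytes */
        return 0;
    }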
jclark@sdcc6.ucsd.edu (John Clark) (02/26/91)
In article <1991Feb16.004202.26343@jato.jpl.nasa.gov> vsnyder@jato.Jpl.Nasa.Gov (Van Snyder) writes:
+sucked). The key to the performance was that Univac knew how to build a
+machine that could handle interrupts quickly (2 cycles, because there was
+an entire duplicate set of registers for the OS), and, more importantly,
+the operating system KNEW HOW TO USE INTERRUPTS. If you implement multitasking
The Texas Instruments' 9900 microprocessor family had a Workspace Pointer.
The 'register set' r0-r15 was pointed to by this pointer in RAM. The
context switch was just: get a new PC and LOAD WP. Unfortunately, TI, in their
wonderful fumble of the microprocessor line, didn't use their
corporate resources to dominate the market. The rest is history.
--
John Clark
jclark@ucsd.edu
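The idea behind the 9900's cheap context switch, modelled in C: since r0-r15 live in RAM at the address held in the Workspace Pointer, switching tasks means swapping three machine registers (WP, PC, and status) instead of saving and reloading a whole register file the way a 68000 must. This is a toy model of the BLWP/RTWP-style mechanism, not TMS9900 code.

    typedef unsigned short word;

    /* What the 9900 keeps in real silicon: just these three registers.
       The general registers are ordinary words in RAM at address wp. */
    struct ctx9900 { word wp, pc, st; };

    /* A context switch boiled down: save three words, load three words.
       Nothing else moves, because the "register set" was never inside
       the processor to begin with. */
    void context_switch(struct ctx9900 *cpu, struct ctx9900 *from,
                        struct ctx9900 *to)
    {
        *from = *cpu;      /* park the outgoing task's wp/pc/st */
        *cpu  = *to;       /* continue where the incoming one left off */
    }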
vsnyder@jato.jpl.nasa.gov (Van Snyder) (02/27/91)
In article <16998@sdcc6.ucsd.edu> jclark@sdcc6.ucsd.edu (John Clark) writes:
>In article <1991Feb16.004202.26343@jato.jpl.nasa.gov> vsnyder@jato.Jpl.Nasa.Gov (Van Snyder) writes:
 ... Clark replies:
>The Texas Instruments' 9900 microprocessor family had a Workspace Pointer.
>The 'register set' r0-r15 was pointed to by this pointer in RAM. The
>context switch was get new PC and LOAD WP. Unfortunately TI in their
>wonderful fumble of the microprocessor line, didn't use their
>corporate resources to dominate the market. The rest is history.

How many other folks have noticed that good system /= good marketing?
-- 
vsnyder@jato.Jpl.Nasa.Gov   ames!elroy!jato!vsnyder   vsnyder@jato.uucp