[comp.os.minix] minix on the macintosh

craig_dewick%713.602@fidogate.fido.oz (Craig Dewick) (09/05/89)

Original to: cy@dbase.uucp
It would appear, as you have rightly stated, that there is no support for
Minix on the mac at all. I have not seen anything to suggest the contrary at
this time.
 
It is rather disappointing, since the Mac is a fairly good platform to try a
Minix port on. Admittedly the interface to the Mac system software could be
a bit of a pain, but otherwise it SHOULDN'T be too difficult. I don't
program Macs, so I can't really say any more.
 
C ya later.... Craig.
--- Zeta
 * Origin: Zeta: Unix, Minix, Xenix support (02) 627-4177 (3:713/602)

ast@cs.vu.nl (Andy Tanenbaum) (09/07/89)

In article <14645@nswitgould.cs.uts.oz> craig_dewick%713.602@fidogate.fido.oz (Craig Dewick) writes:
>Original to: cy@dbase.uucp
>It would appear, as you have rightly stated, that there is no support for
>Minix on the mac at all. I have not seen anything to suggest the contrary at
>this time.

Not true any more!  I have personally tested a version of MINIX for the
Macintosh written by Joe Pickert.  It is not quite ready, but seems pretty
good.  I am sure he will post more when the time comes.

Andy Tanenbaum

jca@pnet01.cts.com (John C. Archambeau) (09/08/89)

Minix on a Mac would be a very lousy platform, the reason being that Macs
don't have any notion of DMA.  Run Minix on a Mac Plus and I guarantee you it
will perform worse than a 4.77 MHz XT with a Western Digital controller w/DMA. 

Without DMA support in hardware, porting Minix to a Mac would be pointless. 
But one thing does come to mind: if the Radius Accelerators support DMA
for hard drive I/O, then a port to a Mac with a Radius Accelerator
would be worthwhile.

I have heard from very knowledgeable people who know the Mac well that there
is no DMA support unless you go to one of the '030 based machines and put in a
card with the DMA controller.  I suspect that my sources are correct because
of my observations of how our TOPS network at my place of employment performs.

If the accelerators out there support DMA fully, then the Minix port would be
worthwhile, but then again...the Mac wasn't designed to run a time-sharing OS
or to 'multitask' to begin with.  If you want a Mac like environment with
Unix, then you go out and buy a Sun workstation, you stay away from the Mac. 
Even the Mac IIs with the '030 in them aren't speed demons.

If you want Minix, an IBM clone/compatible or Atari ST is worth it in my
opinion.  The Mac is a beast of a machine to write standalone system software
for.

 /*--------------------------------------------------------------------------*
  * Flames: /dev/null (on my Minix partition)
  *--------------------------------------------------------------------------*
  * ARPA  : crash!pnet01!jca@nosc.mil
  * INET  : jca@pnet01.cts.com
  * UUCP  : {nosc ucsd hplabs!hd-sdd}!crash!pnet01!jca
  *--------------------------------------------------------------------------*/

dpi@loft386.UUCP (Doug Ingraham) (09/09/89)

In article <334@crash.cts.com>, jca@pnet01.cts.com (John C. Archambeau) writes:
> Minix on a Mac would be a very lousy platform.  Reason being that Macs don't
> have a notion what DMA is.  Run Minix on a Mac Plus and I guarantee you it
> will perform worse than a 4.77 XT with a Western Digital controller w/DMA. 

IBM ATs and AT clones don't as a rule use DMA on the hard disk.  They do use
it on the floppy.  The reason this approach was chosen was in fact speed.
The data is transferred using the REP INSW and REP OUTSW instructions, which
of course tie up the CPU, but under DOS there is no multitasking and the
transfer took less physical time than using DMA.  For a 512 byte transfer
(the sector size) it takes only 1549 clocks (194us @ 8 MHz) for input and
1292 clocks (162us @ 8 MHz) for output (assuming no wait states).  An
equivalent DMA would take a minimum of 320us excluding wait states and
bus transfer time.  Under a multitasking system DMA would be preferable,
but as we know from experience not essential.
 
> Without DMA support in hardware, porting Minix to a Mac would be pointless. 
> But one thing that does come to mind.  If the Radius Accelerators support DMA
> with hard drive I/O, then the platform to a Mac with a Radius Accelerator
> would be worthwhile.

I have heard that Apple sells Unix for the Mac.  If it doesn't work then
who would buy it?

I am not a Mac fan, but this is not one of the reasons I wouldn't buy one.

-- 
Doug Ingraham (SysAdmin)
Lofty Pursuits (Public Access for Rapid City SD USA)
uunet!loft386!dpi

jca@pnet01.cts.com (John C. Archambeau) (09/11/89)

dpi@loft386.UUCP (Doug Ingraham) writes:
 
I know the xt_wini.c driver uses DMA.  I've torn that driver apart enough
times getting it to work (or attempting to) with my OMTI 5520A.  An
interesting thing is that my OMTI 5520A is an oddball controller when it comes
to setting up the controller for DMA I/O.  I know that FastBack will not work
with it, and I have Minix currently going through bios_wini.c.  The performance
isn't great, but I have compiled the kernel in the background while using
Zterm.  If we want to see Minix become a full blown implementation of Unix, it
will eventually support swapping and full DMA.  If the interrupt handling were
neater, DMA would be a requirement for Minix to be Minix.
 
>I have heard that Apple sells Unix for the MAC.  If it doesn't work then
>who would buy it?

Apple's version of Unix, A/UX, is more geared towards a Mac with an '030.  I
wouldn't run it on an old Mac (128, 512, or 512E), Plus, or SE.  

 /*--------------------------------------------------------------------------*
  * Flames: /dev/null (on my Minix partition)
  *--------------------------------------------------------------------------*
  * ARPA  : crash!pnet01!jca@nosc.mil
  * INET  : jca@pnet01.cts.com
  * UUCP  : {nosc ucsd hplabs!hd-sdd}!crash!pnet01!jca
  *--------------------------------------------------------------------------*/

Leisner.Henr@xerox.com (marty) (09/11/89)

I don't understand what the presence or absence of DMA has to do with the
viability of Minix on a platform.

You have to rewrite the device drivers in a way that's appropriate for the
platform.

marty
ARPA:	leisner.henr@xerox.com
GV:  leisner.henr
NS:  leisner:wbst139:xerox
UUCP:  hplabs!arisia!leisner

cy@dbase.UUCP (Cy Shuster) (09/13/89)

You may not want Minix on the Mac as your operating system of
choice for process control of your nuclear reactor in real
time without DMA... but it would still be very valuable to
peruse and tweak the whole source code of an operating system
for the beast! 

Please keep me posted (sorry) on its progress.

--Cy--

chasm@attctc.Dallas.TX.US (Charles Marslett) (09/13/89)

In article <23658@louie.udel.EDU>, Leisner.Henr@xerox.com (marty) writes:
> I don't understand what the presence or absence of DMA has to do with the
> viability of Minix on a platform.
> 
> You have to rewrite the device drivers in a way that's appropriate for the
> platform.

On most systems (non-VAXen, that is), devices do not have infinite bus width
access (and VAXen just pretend).  So a well written driver will not turn on
the DMA until the data is available, and if possible will use a burst mode
to minimize wasted bus bandwidth.  Similar arguments apply (for completely
different reasons) to channel/coprocessor based mainframes (IBM 360/370/etc).

As a result, a relatively efficient portable driver (that does not need to
support unbuffered synchronous I/O devices like the IBM PC floppy controller)
will wait to transfer a block of data until it is ready.  Further, if the
machine is doing lots of I/O intensive activity, and has more than one channel
(or DMA controller -- not DMA channel), it may want to wait for multiple
blocks to be queued in the buffer.  The difference between programmed I/O
and this tightly controlled DMA or channel transfer is not very significant
(usually the only important issue is interrupt latency, and that is a
characteristic of the hardware+OS, not just the hardware).

> marty
> ARPA:	leisner.henr@xerox.com
> GV:  leisner.henr
> NS:  leisner:wbst139:xerox
> UUCP:  hplabs!arisia!leisner


===========================================================================
Charles Marslett
STB Systems, Inc.  <== Apply all standard disclaimers
Wordmark Systems   <== No disclaimers required -- that's just me
chasm@attctc.dallas.tx.us

jca@pnet01.cts.com (John C. Archambeau) (09/13/89)

The issue of DMA is more of a performance issue.  We all prefer a 5 speed
manual transmission to a 4 speed (at least I do).  The idea behind DMA is to
free the CPU from performing menial I/O tasks that do NOT have to be done by
the CPU.  I know that eventually Minix will be more or less a smaller GNU and
eventually will support such things as full swapping and all of the other
bells and whistles of a full blown Unix OS.  While you don't have to have
DMA to use Minix currently (the fact that many of us have our hard drive
controllers going through bios_wini.c is proof of that), you are giving up a
lot by not having DMA.  It's a lot of wasted CPU time thrown out the door.  I
admit that I am an overzealous Unix person, but I do know that as the prices
on more powerful chips such as the 386 go down, the reality of Minix becoming
what a lot of us want it to be will happen.  

Of course, it might also deal with the fact I have a Sun 386i sitting on my
desk at work along with a SPARCstation 1 on the way.  BTW, when is the SPARC
port of Minix expected to be out?  It sounds more tempting since Toshiba is
planning on making a SPARC laptop.  7 MIPS certainly leaves my 16 MHz '286 box
at home in the dust.  

Just the thought of a Unix based OS on a machine without DMA is a taboo in my
opinion.  Not having virtual memory is bad enough.  :)


 /*--------------------------------------------------------------------------*
  * Flames: /dev/null (on my Minix partition)
  *--------------------------------------------------------------------------*
  * ARPA  : crash!pnet01!jca@nosc.mil
  * INET  : jca@pnet01.cts.com
  * UUCP  : {nosc ucsd hplabs!hd-sdd}!crash!pnet01!jca
  *--------------------------------------------------------------------------*/

henry@utzoo.uucp (Henry Spencer) (09/13/89)

In article <359@crash.cts.com> jca@pnet01.cts.com (John C. Archambeau) writes:
>The issue of DMA is more of a performance issue...

And a non-obvious one, at that.  Non-DMA systems can be faster than ones
with DMA.  If your processor keeps the bus pretty busy -- very likely
nowadays unless it's got nice big caches (the tiny ones on the 68030 do
not qualify) -- then stalling the CPU while the DMA device does its
transfers may be a net loss.  Most modern CPUs can do data copying at
full bus bandwidth (since they are usually faster than the bus), and the
protocol needed to exchange bus ownership can introduce considerable
overhead into DMA.  A non-DMA device with considerable buffering can be
a net performance win over a DMA one.  If you want an example, that's
how add-on Ethernet interfaces for Suns work.
-- 
V7 /bin/mail source: 554 lines.|     Henry Spencer at U of Toronto Zoology
1989 X.400 specs: 2200+ pages. | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

dpi@loft386.UUCP (Doug Ingraham) (09/15/89)

In article <359@crash.cts.com>, jca@pnet01.cts.com (John C. Archambeau) writes:
> The issue of DMA is more of a performance issue.  We all prefer a 5 speed
> manual transmission to a 4 speed (at least I do).  The idea behind DMA is to
> free the CPU from performing menial I/O tasks that do NOT have to be done by
> the CPU.

Because of a botched design it would be terrible to use the motherboard based
DMA on the AT.  Here are the reasons.

1)  Transfer rate is poor.  According to the AT Technical Reference Manual,
    page 1-7 (this is the original 6 MHz manual), under system performance:
    The DMA controller operates at 3 MHz, which results in a clock cycle
    time of 333 nanoseconds.  All DMA data-transfer bus cycles are five 
    clock cycles, or 1.66 microseconds.  Cycles spent in the transfer of bus
    control are not included.  If we assume a 512 byte or 256 word transfer,
    the fastest it can go is 425 microseconds.  The CPU will be running at
    about half speed during this time because of the DMA.

2)  Transfers for 16 bit operations must go to word boundaries.  See the
    description under 3 for why this is bad.

3)  Transfers must not cross over a 64K memory boundary for 8 bit transfers
    or 128K boundaries for 16 bit transfers.  Because of limitations 2 and
    3, the DMA almost always takes place to a buffer on an even byte boundary
    and guaranteed not to cross a 64K boundary.  This requires the CPU to
    perform a block move of the data after the DMA operation is complete
    to put the data in the proper place.  A 256 word move takes an additional
    172 microseconds at 6 MHz.

4)  The controller already has a whole sector of data in its internal
    buffer when it interrupts the processor.  The data should be transferred
    at the maximum rate the bus can handle so that the next operation
    can be queued.

5)  The DMA lines are not even connected to the Hard disk part of the
    controller card.  This makes DMA real tough to use. :-)

The total time for a DMA would be more than 597 microseconds.  Quite a lot
more when you consider that the CPU must honor an interrupt on completion
of the DMA in addition to the interrupt generated by the disk controller
when it has the requested sector.  If the processor handles the request
directly, the times are 258 microseconds for input and 215 microseconds for
output, which is considerably faster.  When all things are considered it is
better than twice as fast to use the REP INSW and REP OUTSW
instructions that the 286 offers.

> Just the thought of a Unix based OS on a machine without DMA is a taboo in my
> opinion.  Not having virtual memory is bad enough.  :)

If we are talking about a good implementation of DMA I agree.  Unfortunately,
on the AT, DMA should be used only for the floppy.


-- 
Doug Ingraham (SysAdmin)
Lofty Pursuits (Public Access for Rapid City SD USA)
uunet!loft386!dpi

jca@pnet01.cts.com (John C. Archambeau) (09/19/89)

dpi@loft386.UUCP (Doug Ingraham) writes:
>Because of a botched design it would be terrible to use the motherboard based
>DMA on the AT.  Here are the reasons.
>
>1)  Transfer rate is poor.  According to the AT Technical Reference Manual
>    Page 1-7 (this is the original 6 MHz manual) under system performance.
>    The DMA controller operates at 3 MHz, which results in a clock cycle
>    time of 333 nanoseconds.  All DMA data-transfer bus cycles are five 
>    clock cycles or 1.66 microseconds.  Cycles spent in the transfer of bus
>    control are not included.  If we assume a 512 byte or 256 word transfer
>    the fastest it can go is 425 microseconds.  The CPU will be running at
>    about half speed during this time because of the DMA.
 
For a 6 MHz AT, you are right, but how many of us out there have a 6 MHz AT?
I don't.  I have a 16 MHz 286 box.  Also, the higher speed '286 chips have
to have a higher speed support chip set.  Another point: AT bus specs
vary greatly from manufacturer to manufacturer.  IBM didn't set the AT bus
technical specifications in stone; the manual you referred to applies ONLY to
a vintage 6 MHz genuine IBM AT.  A classic offender in varying greatly from
the AT bus spec was Kaypro with their first 10 MHz '286.  The bus ran at 10
MHz alongside the CPU...as a result the higher speed cards were born.  The
majority of the '286 motherboards out there are designed to work at an 8 to 10
MHz bus speed (8 being more standard obviously).  Most of my AT cards have a
10 MHz crystal on them.  My motherboard is set up in such a way that I can
run the bus at either full or half CPU clock speed.  It's not set in stone
that such a performance loss would be incurred with a high speed '286
motherboard, especially ones with all Harris or AMD chip sets.  Intel is
pretty P'ed at AMD and Harris for cleaning up their mess and making a better
'286.  Won't be too long before the 25 MHz '286s hit the market if they're
not out already.  Such performance losses I'm sure were corrected with the
newer high speed support chip sets.
 
>2)  Transfer for 16 bit operations must go to word boundaries.  See the
>    description under 3 why this is bad.
>
>3)  Transfer must not cross over a 64k memory boundary for 8 bit transfers
>    or 128k boundaries for 16 bit transfers.  Because of limitation 2 and
>    3 the DMA almost always takes place to a buffer on an even byte boundary
>    and guaranteed not to cross a 64k boundary.  This requires the CPU to
>    perform a block move of the data after the DMA operation is complete
>    to put the data in the proper place.  A 256 word move takes an additional
>    172 microseconds at 6 MHz.

I never said that the 80x86 chips weren't brain damaged.  However, in most
cases a pointer will end up on a word or even a paragraph boundary anyway,
so it's not that big an issue in my opinion.  A smart compiler can adopt
the convention that all pointers are on a word boundary.  The file system
part of the kernel can handle the 64K or 128K DMA problem.  It's just a matter
of dealing with those idiosyncratic annoyances that exist in all
machines.  I have yet to see an ideal architecture; everything has its
problems.

>If we are talking about a good implementation of DMA I agree.  Unfortunately
>on the AT DMA should be used only for the floppy.
 
I do agree that I wouldn't implement DMA on a 6 MHz 286 (if you can find them
anymore).  However, if the later machines can handle it better, as I suspect,
then why not use it?  Also, what about DMA on a machine equipped with an EISA
bus or MCA?  Would it work better than the classic 6 MHz AT bus?  Most likely,
but then again, how many of us can afford the bus specs for MCA from IBM?

 /*--------------------------------------------------------------------------*
  * Flames: /dev/null (on my Minix partition)
  *--------------------------------------------------------------------------*
  * ARPA  : crash!pnet01!jca@nosc.mil
  * INET  : jca@pnet01.cts.com
  * UUCP  : {nosc ucsd hplabs!hd-sdd}!crash!pnet01!jca
  *--------------------------------------------------------------------------*
  * Note  : My opinions are that...mine.  My boss doesn't pay me enough to
  *         speak in the best interests of the company (yet).
  *--------------------------------------------------------------------------*/

chasm@attctc.Dallas.TX.US (Charles Marslett) (09/20/89)

In article <397@crash.cts.com>, jca@pnet01.cts.com (John C. Archambeau) writes:
> dpi@loft386.UUCP (Doug Ingraham) writes:
> >Because of a botched design it would be terrible to use the motherboard based
> >DMA on the AT.  Here are the reasons.

I beg to differ: when the PC was designed, the only reasonably available chip
was the 8237 -- so IBM built a computer that had real components in it.  If
you want an example of one that was built of unreal parts, look at the vintage
Atari computers (the video chip the computer was designed around was not really
manufacturable until the computer was almost obsolete!  So they built most of
them with a crippled subset chip... not so very different from planning to
use the cripple in the first place ;^).

> >1)  Transfer rate is poor.  According to the AT Technical Reference Manual
> >    Page 1-7 (this is the original 6 MHz manual) under system performance.
> >    The DMA controller operates at 3 MHz, which results in a clock cycle
> >    time of 333 nanoseconds.  All DMA data-transfer bus cycles are five 
> >    clock cycles or 1.66 microseconds.  Cycles spent in the transfer of bus
> >    control are not included.  If we assume a 512 byte or 256 word transfer
> >    the fastest it can go is 425 microseconds.  The CPU will be running at
> >    about half speed during this time because of the DMA.
>  
> For a 6 MHz AT, you are right, but how many of us out there have a 6 MHz AT?

And for an 8 MHz bus, the CPU gets a fastest possible transfer of 2 clocks
(0.250 uS) and the DMA still gets a fastest possible transfer of 5 clocks
(or 0.625 uS).

For a 10 MHz bus, designed to work with at least a few commercially available
adapter cards, the numbers can get a bit closer.  If you accept the fact that
the only floppy controllers you can use are the high dollar OMTI ones, then
the CPU transfers are either 2 or 3 clocks (.20 or .30 uS) and the DMA transfers
are still 5 (.50 uS).  And the one 12 MHz box I have ever seen did not do DMA
at all (the floppy controllers would only work if the buffer was in the
motherboard RAM, and even then OVERRUNS(?) were quite common).

> I don't.  I have a 16 MHz 286 box.  Also, the higher speed '286 chips have
> to have a higher speed support chip set.  Another point, AT bus specs
> vary greatly from manufacturer to manufacturer.  IBM didn't set the AT bus
> technical specifications in stone, the manual you referred to applies ONLY to
> a vintage 6 MHz genuine IBM AT.  A classic offender of varying greatly from
> the AT bus spec was Kaypro with their first 10 MHz '286.  The bus ran at 10
> MHz along side the CPU...as a result the higher speed cards were born.  The

Not very many were -- you cannot design a card to run with the Kaypro bus and
have it work with anyone else's.  The IBM AT bus spec is not cast in stone, but
the timings it used are -- that is why almost none of the busses in the newer
286 and 386 boxes run any faster than 10 MHz (can I say none, absolutely?).  If
you do run faster, the 8 and 16 bit slots are useless since no one makes a card
that will run in them (or will run in them very well, like the V7 VRAM card
that is a fast 16-bit card at 8 MHz, but on a fast 10 MHz bus turns into a slow
8-bit card).  Unlike the MCA bus, the ISA and EISA system busses are more-
or-less synchronous.  It is not very easy to change the timing without paying
a bad performance penalty.

> majority of the '286 motherboards out there are designed to work in an 8 to 10
> MHz bus speed (8 being more standard obviously).  Most of my AT cards have a
> 10 MHz crystal on them.  My motherboard is set up in such a way where I can
> run the bus at either full or half CPU clock speed.  It's not set in stone
> that such a performance loss would be incurred with a high speed '286
> motherboard.  Especially one's with all Harris or AMD chip sets.  Intel is
> pretty P'ed at AMD and Harris for cleaning up their mess and making a better
> '286.  Won't be too long before the 25 MHz '286's hit the market if they're
> not out already.  Such performance losses I'm sure were corrected with the
> newer high speed support chip sets.

We have perhaps 15 different vendors' AT boxes running from 8 MHz to 25 MHz,
and only two of the whole set run the bus at 10 MHz (and those two default to
8 MHz).

Most add in cards (including every EMS card on the market and all but one of
the video cards that I have been able to get my hands on) will not run as 16-bit
memory at 10 MHz, some will not even run as 8-bit cards at that speed (with
2 extra wait states forced by the motherboard logic to make it "easy").

That, however, is not really a crucial point -- if the DMA processor runs at
half the CPU speed, the ratio of CPU transfer rate to DMA transfer rate is
5:1; if it runs on the same clock as the CPU, the ratio is still 2.5:1 (or
a little worse, if you add in bus arbitration time -- say about 3:1).  So
programmed I/O still runs three times as fast unless you run the DMA chip
off a faster clock than the CPU (and thus cannot use any of the motherboard
chipsets).  The only way to win is to go to a better DMA architecture (like
the 186 or 188 chips have) -- but then you cannot run any more than a limited
set of programs -- no MINIX, no XENIX, no Fastback, etc.

> >2)  Transfer for 16 bit operations must go to word boundaries.  See the
> >    description under 3 why this is bad.
> >
> >3)  Transfer must not cross over a 64k memory boundary for 8 bit transfers
> >    or 128k boundaries for 16 bit transfers.  Because of limitation 2 and
> >    3 the DMA almost always takes place to a buffer on an even byte boundary
> >    and guaranteed not to cross a 64k boundary.  This requires the CPU to
> >    perform a block move of the data after the DMA operation is complete
> >    to put the data in the proper place.  A 256 word move takes an additional
> >    172 microseconds at 6 MHz.
> 
> I never said that the 80x86 chips weren't brain damaged.  However, in most
> cases a pointer will end up being a word or even a paragraph boundary anyway,
> so it's not that big of an issue in my opinion.  A smart compiler can adopt
> the convention that all pointers are on a word boundary.  The file system
> part of the kernel can handle the 64K or 128K DMA problem.  It's just a matter
> of dealing with those idiosyncratic annoyances out there that exist in all
> machines.

This reduces itself to a memory-to-memory move though, and that is much worse
than block I/O (since you have to copy the data twice; now the ratio between
DMA and programmed I/O is an atrocious 4:1).

That is, unless you allocate memory in blocks wholly within a 64K region, and
I leave it as an exercise for the student (:^) to see how that affects
fragmentation and performance of the memory manager.

> I do agree that I wouldn't implement DMA on a 6 MHz 286 (if you can find them
> anymore).  However, if the later machines can handle it better as I suspect,
> then why not use it?  Also, what about DMA on machine equipped with an EISA
> bus or MCA?  Would it work better than the classic 6 MHz AT bus?  Most likely,
> but then again, how many of us can afford the bus specs for MCA from IBM?

Again, the DMA to CPU ratio is relatively unchanged -- faster machines just
mean that you may not need to worry as much about getting full performance
out of your box, but DMA is still a lot slower (so you wouldn't use it if
speed is critical -- you might if interrupt latency is).

Except, that is, for the MCA bus (perhaps).  I suspect that it is still
enough like the AT to impose most of the same performance penalties, especially
since IBM would like to sell the idea of bus mastering for peripherals, and
that becomes a whole lot easier if you make DMA a dog.

===========================================================================
Charles Marslett
STB Systems, Inc.  <== Apply all standard disclaimers
Wordmark Systems   <== No disclaimers required -- that's just me
chasm@attctc.dallas.tx.us