ts@cup.portal.com (Tim W Smith) (11/11/90)
I've seen three different methods used to boot non-DOS operating
systems on 386 machines.

One (used by Netware 386) is for DOS to boot, and then a DOS
application is run which loads the OS. The OS is stored as a DOS file
on a DOS partition on the hard disk.

The second, which was used by 386/ix 1.0.6 (the last version of 386/ix
that I used), is for the boot program that is loaded from the Unix
fdisk partition to read in the OS from the Unix filesystem. This boot
program uses the INT 13h BIOS function to perform the disk I/O. INT 13h
is used up until the OS is loaded, at which point the Unix disk driver
is used for any subsequent disk access. This is also the method used by
Netware 286.

The third, which is used by SCO Unix 3.2, is for the boot program that
is loaded from the Unix fdisk partition to load the OS, or to load
other boot programs from the Unix partition that load the OS (I'm not
quite sure which boot program does what here). At some point before the
OS is loaded, the boot programs start manipulating the disk hardware
directly. I suspect that they also switch the processor to protected
mode before loading the OS.

Why is this third method used? It seems to me that the first two
methods are vastly superior.

Consider, for example, trying to add support for a new disk controller.
The vendor of the disk controller starts out by producing a Unix disk
driver that knows about their controller, and this gets linked into the
kernel. Assume that the disk controller is also meant to work with DOS,
so it contains a BIOS ROM that installs an INT 13h handler at power-up
time, before the boot sequence begins. If either of the first two boot
schemes is used, the controller vendor does not have to do anything
else to allow Unix to boot off their controller.

Compare this to the third method. Unless the controller vendor can
convince the OS vendor to modify the boot code to recognize the new
controller, the new controller can only be used for secondary disks, or
the system must be booted from a floppy and convinced that rootdev,
swapdev, etc., are on the new disk (not a problem with SCO, at least,
since their boot program takes these as arguments).

So, why do people use the third method? Do any of the Unix vendors use
the first or second in current releases? Is it really as big a problem
as I think, or am I just prejudiced because I write SCSI software and
have to keep explaining to my boss that it's not my fault that Unix
can't boot ("The guys doing DOS, OS/2, Netware 286, and Netware 386 are
all booting! Why can't you?") :-)

Tim Smith
james@bigtex.cactus.org (James Van Artsdalen) (11/12/90)
In <35811@cup.portal.com>, ts@cup.portal.com (Tim W Smith) wrote:
> The third, which is used by SCO Unix 3.2, [...]
> At some point before the OS is loaded, the boot programs start
> manipulating the disk hardware directly.
> Assume that the disk controller is also meant to work with DOS, so
> it contains a BIOS ROM that installs an INT 13h handler at power up
> time before the boot sequence begins.

The reason has to do with disk drives that have more than 1024
cylinders: the BIOS interface cannot address such a configuration.
Many controllers spoof the drive geometry by claiming more heads or
more sectors per track, and correspondingly fewer cylinders. But even
this won't work for a drive bigger than 512meg, because the BIOS only
supports 20 bits of sector number per drive.

The problem is how to support large ESDI or SCSI drives, which can be
much bigger than 512meg. If you call the BIOS, you can read from any
device that hooks INT 13h, but you can't read the entire drive. Worse,
once the kernel is up and talking to the WD-1010 interface of the
controller, any translation that the INT 13h hook was doing is lost,
and the kernel might not see things in the same place that the boot
code saw them.

As a practical matter, this is why it's a bad idea to have one big
partition on a boot drive. If you do, it is quite possible that any
given kernel rebuild will yield a kernel that lies partly beyond the
512meg mark on the drive, and the next reboot will fail.
--
James R. Van Artsdalen       james@bigtex.cactus.org  "Live Free or Die"
Dell Computer Co   9505 Arboretum Blvd   Austin TX 78759    512-338-8789
ts@cup.portal.com (Tim W Smith) (11/12/90)
Finding out what mapping was being used by INT 13h shouldn't be a
problem. The boot code would ask the INT 13h code for this and pass it
to the kernel.

I would be satisfied, I suppose, if Unix vendors would make the object
code for their boot programs available as part of the kernel
reconfiguration tools, and document how to link a driver in with the
boot code.

Tim Smith
james@bigtex.cactus.org (James Van Artsdalen) (11/12/90)
In <35858@cup.portal.com>, ts@cup.portal.com (Tim W Smith) wrote:
> Finding out what mapping was being used by INT 13h shouldn't be
> a problem. The boot code would ask the INT 13h code for this
> and pass it to the kernel.

The BIOS doesn't have a call to return the mapping, just the apparent
geometry. While the apparent geometry is almost always sufficient to
deduce the mapping, it's not guaranteed. For example, right now bigtex
is running a configuration where the apparent geometry is one sector
per track greater than what's actually used, and the BIOS claims half
the real number of cylinders and twice the sectors per track (after
that addition). It's a long story as to how this came to be, and I've
never bothered to fix it.
--
James R. Van Artsdalen       james@bigtex.cactus.org  "Live Free or Die"
Dell Computer Co   9505 Arboretum Blvd   Austin TX 78759    512-338-8789