eho@bogey.Princeton.EDU (Eric Y.W. Ho) (06/19/89)
Well, as I might have mentioned in gnu.emacs and/or comp.sys.sun, basically I hope that it'll be general/flexible enough to do the following things:

* When cheap multiprocessor desktops do arrive, I want to have a choice: if I've more money to spend, then I can go out and buy several cpus, plug them into my desktop, assign one or two cpus to a few windows where I do big compiles or run some heavyweight stuff, assign one cpu to general system things, and maybe one or two cpus to my code development/debugging (a sketch of this cpu-to-task assignment follows the list). Of course, if I don't have the money, then I'll just have to settle for one cpu to do all these tasks, or go out to other nodes on the net and let my big compiles run there (except that very often those other nodes are used by other people in the lab, and they may not like such heavyweights banging on their workstations).

* It should have some network support like NFS, RFS, or something better. Some sort of distributed system services are necessary -- e.g. large system databases. Servers (for system services) should be decoupled from disks and mass-storage devices. Basically, it'll be nice if you've a mass-storage subsystem that understands nothing but nfs/swap/boot and is physically separated from the servers. The reason is that the current server-client model found in SunOS or NFS is too restrictive: when you take a server down now, people on the clients can't do anything anymore. The problem is worsened by the fact that Unix is very disk-oriented, so when the server is down the clients can't get to the system disks anymore!! What would be nice is that when a client needs a service, it simply yells out either to other servers on the net (e.g. when it needs YP service) or to the mass-storage subsystem to boot, swap, or do any other file/disk related activities (see the broadcast sketch after the list). And because desktops are getting more powerful and cheaper year by year, what I want to see is that servers are just desktops that provide some special services; then I can put a few of these in various people's offices without buying the expensive rackmountables. More importantly, people can configure and test various system/server stuff (or some weird custom software) on various servers without taking the net down, while everyone can still do useful stuff on their desktops. The trick is really to have a separate mass-storage subsystem. And because it is smart enough to run only a few things, its likelihood of crashing may be less -- and you can always have a few of these guys. Also, you can put multiple cpus and multiple ethernet ports in such a subsystem to increase throughput. In other words, you treat this subsystem as a massive but fast file-drawer that handles all file/disk related requests from nodes on the net.

* It should make use of optifloppy drives -- it is really a matter of convenience. I want to be able to get a system floppy, plug it in, and go -- to hell with the system tapes; they're too clumsy, and it takes a long time to install/configure the system from them. And when I've finished installing/configuring my system, I can do a floppy-to-floppy copy and save it, so that when disaster strikes again I can just get my other floppy from the drawer, plug it in, and go (a raw-copy sketch follows the list). I mean, considering the large size of the system files/binaries/sources plus whatever 3rd-party system-related software you've got, putting them on tapes is just too clumsy and takes too long to config/reconfig a system (as after a disk crash). If the optifloppy drives are too slow, then you can put the user files and swap on magnetic disks and maybe cache frequently used system binaries/datafiles in memory so that you'll get the needed throughput (see the last sketch below).
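To make the cpu-to-task idea concrete, here is a little C sketch of what the interface might feel like. The sched_setaffinity() call is borrowed from a much later Linux and is purely an assumption here -- nothing like it exists on today's Unixes:

/* Sketch: pin a heavyweight compile onto cpus 2-3, leaving the rest
 * for interactive work.  Assumes a modern Linux with the (non-POSIX)
 * sched_setaffinity(2) call. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(2, &set);           /* dedicate cpus 2 and 3 ... */
    CPU_SET(3, &set);           /* ... to this process tree  */

    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        exit(1);
    }

    /* everything exec'd from here inherits the cpu mask */
    execlp("make", "make", "-j2", (char *)NULL);
    perror("execlp");
    return 1;
}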
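And here is roughly what I mean by a client "yelling out" for a service: a plain UDP broadcast, first responder wins. The port number and the one-line WHO-SERVES protocol are completely made up:

/* Sketch: locate whoever provides a service on the local wire,
 * instead of being hard-wired to one server. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SVC_PORT 5150           /* hypothetical service-location port */

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    struct sockaddr_in dst;
    const char *probe = "WHO-SERVES yp";
    char reply[512];
    ssize_t n;

    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(SVC_PORT);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    /* "anyone out there serving YP?" */
    sendto(s, probe, strlen(probe), 0,
           (struct sockaddr *)&dst, sizeof(dst));

    /* first responder wins; a real client would time out and retry */
    n = recvfrom(s, reply, sizeof(reply) - 1, 0, NULL, NULL);
    if (n > 0) {
        reply[n] = '\0';
        printf("server answered: %s\n", reply);
    }
    close(s);
    return 0;
}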
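The floppy-to-floppy copy is nothing fancy -- just a raw block copy, dd(1) style. A sketch, with made-up device names:

/* Sketch: duplicate the configured system floppy onto a blank one. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64 * 1024];
    int in  = open("/dev/rofd0", O_RDONLY);   /* master system floppy */
    int out = open("/dev/rofd1", O_WRONLY);   /* blank backup floppy  */
    ssize_t n;

    if (in == -1 || out == -1) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof(buf))) > 0)
        if (write(out, buf, n) != n) {
            perror("write");
            return 1;
        }
    close(in);
    close(out);
    return 0;
}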
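And for caching the hot system binaries in memory, something like the following might do, assuming an mmap/mlock style of interface (mlock is an assumption -- it may not exist, or may need privilege, on a given system; /bin/cc just stands in for some frequently-used binary):

/* Sketch: keep a frequently-used binary resident in core so later
 * exec's hit memory instead of the (slow) opti-floppy. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/bin/cc";        /* hypothetical hot binary */
    int fd = open(path, O_RDONLY);
    struct stat st;
    void *p;

    if (fd == -1 || fstat(fd, &st) == -1) {
        perror(path);
        return 1;
    }
    p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED || mlock(p, st.st_size) == -1) {
        perror("mmap/mlock");
        return 1;
    }
    /* a caching daemon would keep this mapping alive */
    pause();
    return 0;
}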
Well, that was my $0.02.

Eric Ho
Cognitive Science Lab., Princeton University
email = eho@bogey.princeton.edu
voice = 609-987-2819
--
regards.

-eric-