martelli@cadlab.sublink.ORG (Alex Martelli) (06/06/91)
knechod%peruvian.utah.edu@cs.utah.edu (Kevin Nechodom) writes (trying to
decide between an HP730 + 2 X terminals, versus a SS2 + 2 SS IPCs):
...
:I anticipate mostly database (don't know what yet) and stats (probably SAS)
:applications. I have been told that Sparc floating point is abysmal, but that
:DB and stats are mostly I/O intensive, and Sun is better than HP for I/O.
...
:What a quandary! What am I missing?

I'd say you ARE missing an important point: an I/O-intensive application
will run VASTLY better if the disks holding the data are local to the CPU
processing that data!  Quite apart from any criticisms that may be advanced
against NFS, *NO* network filesystem running on a standard Ethernet is ever
going to give a throughput substantially above 1 megabyte/second, is it now?
Standard SCSI can give you twice that easily; HP is pushing the new SCSI-2
standard in implementations which should further double the throughput; and
Sun itself (when it isn't trying to undercut the price of some competitive
offer :-) pushes IPI, another high-performance standard (on SERVER-class
machines, like the 470 and 490, not on "lowly" SS2s).

In your proposed Sun configuration, each IPC would probably be "dataless" -
just swap space and /tmp - with the database residing on the SS2; thus
every database access from an IPC would have to go through the net (I'm
not sure about the quality of the IPC's network interface card and software
layers, but I'd bet you WON'T, EVER, see as much as a megabyte/sec of
throughput!).  By contrast, when a DB transaction is started from an X
terminal, just a few bytes describing the transaction flow over the net,
then a huge amount of data may be read and processed *locally* on the 730
at warp speed, and finally a small or moderate amount of data (some X
drawing orders for a graph, say) comes back over the net.
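The arithmetic behind this argument is trivial but worth making explicit.
A back-of-the-envelope sketch (the rates are the effective figures quoted
above, NOT theoretical peaks; the 200 MB table size is my own arbitrary
example, pick your own):

```python
# Rough data-path throughput comparison for an I/O-bound database scan.
# Rates are the effective figures from the discussion above (MB/s).
RATES_MB_PER_SEC = {
    "Ethernet/NFS (ceiling)": 1.0,  # ~1 MB/s usable on standard Ethernet
    "SCSI (local disk)":      2.0,  # "twice that easily"
    "SCSI-2 (local disk)":    4.0,  # SCSI-2 roughly doubling SCSI again
}

def scan_seconds(table_mb, rate_mb_s):
    """Seconds to pull table_mb megabytes through a path of rate_mb_s MB/s."""
    return table_mb / rate_mb_s

TABLE_MB = 200.0  # hypothetical table size, for illustration only
for path, rate in RATES_MB_PER_SEC.items():
    print("%-24s %6.0f s" % (path, scan_seconds(TABLE_MB, rate)))
```

So a full scan of that hypothetical table takes over three minutes across
the wire but well under one on a local SCSI-2 disk - before you even count
NFS protocol overhead or other traffic sharing the Ethernet.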
Of course, the best thing would be to benchmark your own application.  But
in my general experience it is far from rare that, when I sit at a
workstation, opening an xterm from the fast machine with the big disks and
running an I/O-intensive job *there* turns out quite a bit faster than
NFS-mounting the big disks and running the same job on my own workstation!

[Note: I haven't seen any of the crossposting you refer to on your original
Newsgroups line, so I'm adding a crosspost to comp.arch (to see if anybody
can confirm, or shoot down, my analysis), since this seems to me to be a
general architectural problem rather than a specific hp/sun one.  Other
newsgroups might be quite appropriate (comp.databases, comp.protocols.nfs,
comp.benchmarks, what else?), but I'll leave any further augmentation of
the crossposting, if any, to further discussants...]
--
Alex Martelli - CAD.LAB s.p.a., v. Stalingrado 53, Bologna, Italia
Email: (work:) martelli@cadlab.sublink.org, (home:) alex@am.sublink.org
Phone: (work:) ++39 (51) 371099, (home:) ++39 (51) 250434
Fax: ++39 (51) 366964 (work only), Fidonet: 332/407.314 (home only)