STONE@SUMEX-AIM.STANFORD.EDU (Jeffrey Stone) (03/19/88)
A few questions about the deployment of X-based applications in the future:

1. With workstations gaining computational power rapidly (e.g. through
   RISC developments), will most use of X in the future be in situations
   where the server and client both run on the same computer, generally
   a workstation?  I believe that is the case today.  Is it expected to
   change significantly in the next few years?

2. Is there likely to be growing interest in low-priced X servers
   (diskless PCs running an X server application) networked to powerful
   multi-user computers which perform computation for many users?  I
   have difficulty seeing this as a generally interesting configuration
   for technical applications except in those relatively few cases where
   supercomputer power is required.

3. Does anyone see interest in more business-oriented applications
   (e.g. financial workstations) with low-priced X terminals networked
   to multi-user client systems?

4. In cases where a single computer serves as the client for a number of
   X terminals, will application developers want to add special features
   to their applications to optimize for the multi-user environment?  In
   the commercial world we have examples of this in things like CICS,
   which is, among other things, an efficient multi-terminal handler
   running in a multi-user operating system.  I know this is a long way
   from the X world, but are there things applications may want to do to
   take advantage of the multi-user environment?

5. When a user runs an application where the client and the server share
   the same physical system (a workstation), does the overhead of
   client-server communication add much computational load compared with
   a non-distributed architecture?  In other words, is X an appropriate
   architecture when the user interface and the computation both run on
   one workstation?

I would appreciate your responses.

Jeffrey Stone
Menlo Park, CA
-------
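(A minimal sketch added for question 5, not part of Jeffrey's post: the same
Xlib client code runs whether the server is local or across the network; only
the DISPLAY setting differs, and on a single workstation the connection is
typically carried over local IPC such as a UNIX-domain socket rather than a
full network round trip, which is where most of the "same machine" overhead
question lives.  The window geometry and event handling below are arbitrary.)

    /* Minimal X client: identical whether the server is local or remote. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        Display *dpy;
        Window   win;
        XEvent   ev;

        /* NULL means "use the DISPLAY environment variable":
         * ":0" for the local server, "somehost:0" across the net. */
        dpy = XOpenDisplay(NULL);
        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            exit(1);
        }

        win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                  10, 10, 200, 100, 1,
                                  BlackPixel(dpy, DefaultScreen(dpy)),
                                  WhitePixel(dpy, DefaultScreen(dpy)));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        /* Wait for the first Expose event, then exit. */
        do {
            XNextEvent(dpy, &ev);
        } while (ev.type != Expose);

        XCloseDisplay(dpy);
        return 0;
    }

(Compile with something like "cc client.c -lX11".  The point is that the
client/server split is an interface boundary, not necessarily a network hop.)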
bzs@bu-cs.BU.EDU (Barry Shein) (03/21/88)
> 2. Is there likely to be growing interest in low-priced X Servers
>    (diskless PCs running an X server application) networked to
>    powerful multi-user computers which perform computation for
>    many users?  I have difficulty seeing this as a generally
>    interesting configuration for technical applications except in
>    those relatively few cases where supercomputer power is
>    required.

This has become a popular view as workstations rival the minis.  I believe
it will once again prove fallacious.  The problem right now is that a lot of
the "mini" vendors are falling on their faces.  The next two years or so
should bring, at the very least, several flavors of small-scale, transparent,
parallel-processing "minis" extending toward 1000 MIPS (certainly hundreds of
MIPS will be common) and costing only a few hundred thousand dollars.  This
will renew interest in the "departmental level" machine for many.

Another issue is disk: large imaging requires large disks.  Someone on
Unix-wizards was recently talking of regularly processing 400MB image files
(400MB per file).  It's generally inconvenient to have disks which can hold
that kind of data in one's office, especially if you ever expect to do
backups.  I suppose one could say they'll never do that, and they're probably
right, but things are scaling up all over.

The other suggestion might be "isn't that a remote file system issue?"  Yes,
to some extent, though moving hundreds of MB through even fast networks is
sometimes a poor trade-off; often it is better to just ship the final image
(like I said, "it depends").

It seems to me that server/client strategies extend these options at a
relatively small cost rather than limiting them.  And given the assumption of
faster and faster workstations and the relatively fixed overhead of splitting
the imaging process, the argument seems to run the other way.  But it is an
interesting argument.

	-Barry Shein, Boston University
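(A back-of-the-envelope sketch added here, not from Barry's post, to make the
trade-off concrete: rough wire time for the 400MB file he cites versus
shipping only a final rendered image.  The link speeds and the 1MB image size
are assumptions chosen for illustration.)

    /* Rough transfer-time estimate: raw data set vs. final image. */
    #include <stdio.h>

    int main(void)
    {
        double file_mb  = 400.0;                /* raw image file, as cited     */
        double image_mb = 1.0;                  /* assumed final frame size     */
        double links_mbps[] = { 10.0, 100.0 };  /* assumed link speeds (Mbit/s) */
        int i;

        for (i = 0; i < 2; i++) {
            double secs_file  = file_mb  * 8.0 / links_mbps[i];
            double secs_image = image_mb * 8.0 / links_mbps[i];
            printf("%5.0f Mbit/s: raw file ~%5.0f s, final image ~%4.1f s\n",
                   links_mbps[i], secs_file, secs_image);
        }
        return 0;
    }

(At 10 Mbit/s the raw file is on the order of five minutes of wire time while
the final image is well under a second, which is the sense in which "it
depends": sometimes shipping the result beats shipping the data.)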