JBVB@AI.AI.MIT.EDU (11/13/87)
> .... If you are running an old release you should upgrade.  -- WIN

This is one of the problems faced by network managers and users. Upgrading
might be easy if they were my machines and their software were under
maintenance. Not many manufacturers offer software upgrades on a "1 master
per site" basis, and the fees I remember from my PDP-11 days ran to
thousands of dollars a year. Most licenses allow users to copy the new
software to many machines, but having only one set of current manuals
breaks down if more than 5 or 6 people are using them, or they aren't
close together.

Regardless of the cost, there is a fair amount of effort involved in
installing a new release, and not many users of these machines are up to
doing it themselves, even if they had the time. Customization and
O/S-version-dependent third-party software can make upgrading essentially
impossible, even when attempted by the original installer.

All of which is why many organizations setting up large networks want
homogeneous hardware, rigid version control, and source code. Perhaps the
manufacturers should put on their thinking caps...

jbvb
mckee@MITRE.ARPA (11/20/87)
In your note to nowicki@sun.com you described the difficulty of installing
a new release and concluded with:

> All of which is why many organizations setting up large networks want
> homogeneous hardware, rigid version control, and source code. Perhaps
> the manufacturers should put on their thinking caps...

I would like to understand the underlying rationale for your
recommendations concerning source code, rigid version control, and
homogeneous hardware. I offer my speculations below and would like you to
confirm or revise them.

Source Code: So that an organization can fix or improve the code itself,
or have a third party do so, and not have to depend on the original
vendor. Does your recommendation change if software maintenance is in
effect?

Rigid Version Control: This has more to do with the organization's network
than with vendors. The organization doesn't want two different versions of
the same process to be in use.

Homogeneous Hardware: This one troubles me - I would like to think that
through the use of standard protocols an organization could achieve
interoperability among different hardware suites. What is your view of the
matter?

Regards - Craig
JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen") (11/21/87)
> ... I offer my speculations below and would like you to confirm or
> revise them.
>
> Source Code: So that an organization can fix or improve the code itself,
> or have a third party do so, and not have to depend on the original
> vendor. Does your recommendation change if software maintenance is in
> effect?

Depends on the situation. A university I know of has a couple hundred PCs
off line waiting for software maintainers at one of two vendors to solve a
TCP problem. Everybody is acting in good faith, and both sides are partly
at fault. One vendor has delivered what they think is a fix, but it took 4
months. The other vendor had a new release, which had to be tested to see
if the problem went away, etc., etc.

There are dozens of Unix vendors who are shipping 4.2's buggy,
non-standard TFTP. How long will it be until they supply nameservers?
Consider the vendors who spent years getting subnets working. Some of
these long-standing problems may be helped by a host version of the "how
to be a good gateway" RFC (1009?), and by the Non-Interoperability
questionnaire which is supposed to get some attention at the conference in
December, but I can't predict anything.

> Rigid Version Control: This has more to do with the organization's
> network than with vendors. The organization doesn't want two different
> versions of the same process to be in use.

Correct. There is also an element of "we don't want anyone to modify it".
Identifying which version is in use is simply one more step in identifying
a problem, whether or not it has been encountered before.

> Homogeneous Hardware: This one troubles me - I would like to think that
> through the use of standard protocols an organization could achieve
> interoperability ...

It is generally true that via TCP/IP, two different hosts can communicate.
Significant TCP bugs are getting pretty rare, although interpretations of
things like the urgent pointer still differ (a sketch following this
message illustrates the disagreement). Except for the Unix XPWD
incompatibility, FTP does pretty well. SMTP has one widespread problem,
discussed last week. Most Telnets interoperate well, until you start
playing with options, or wondering why you are receiving parity (excuse my
hobbyhorse).

However, this is fairly nuts-and-bolts compared to what you can do between
homogeneous systems. There are a number of essentially Unix-to-Unix
protocols that add functionality to the basic ARPA suite, but they aren't
well documented, and non-Unix implementations are extremely rare. Network
printing is a prosaic example.

Furthermore, most organizations have a finite number of maintainers. For
any number of maintainers, it appears that the tinkerers employed by the
vendors can tinker at least as fast as the maintainers can learn, code,
and upgrade. The more vendors, the more high-level people you need, and
high-level techies are the non-technical manager's nightmare. Whole
sectors of the software industry are founded on the absolute unwillingness
of many companies to permanently employ any techies more qualified than a
computer operator or an application program maintainer. DEC has been
moving VMS in their direction for years, but few vendors have even begun
on "Unix for the masses", and Unix represents more than half (vigorous
hand-waving) of all TCP/IP hosts. Anyone who's ever hacked on sendmail.cf
knows what a vale of tears a complex software system whose implementer is
unavailable can be. So, the managers have mostly learned to keep things
simple, generic where possible, but ultimately maintainable.
The net I described in my first posting is still usable, so the
maintainers and their managers are sticking to the problems they can
solve, and relying on the influx of new, uniform machines to eventually
eliminate the off-maintenance bad actors. I'm not saying I prefer this,
but it looks pretty likely.

jbvb
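As a concrete illustration of the urgent-pointer disagreement mentioned
above: RFC 793's wording supports two readings of where a segment's urgent
data ends, one octet apart, and implementations split between them. The
sketch below is a hypothetical illustration in C; the function names and
example numbers are invented for this note, not taken from any
implementation discussed in the thread.

    /* Two readings of the TCP urgent pointer "up" for a segment whose
     * first octet has sequence number "seq".
     */
    #include <stdio.h>

    /* Reading 1 (used by Berkeley-derived stacks): the urgent pointer
     * names the octet FOLLOWING the urgent data, so the last urgent
     * octet is seq + up - 1. */
    unsigned long last_urgent_following(unsigned long seq, unsigned long up)
    {
        return seq + up - 1;
    }

    /* Reading 2: the urgent pointer names the last urgent octet itself,
     * so the last urgent octet is seq + up. */
    unsigned long last_urgent_at(unsigned long seq, unsigned long up)
    {
        return seq + up;
    }

    int main(void)
    {
        unsigned long seq = 1000, up = 8;    /* hypothetical segment */

        printf("reading 1: last urgent octet = %lu\n",
               last_urgent_following(seq, up));    /* prints 1007 */
        printf("reading 2: last urgent octet = %lu\n",
               last_urgent_at(seq, up));           /* prints 1008 */
        return 0;
    }

Two hosts that pick different readings will disagree by one octet about
which data is urgent - exactly the kind of interoperability wart that
survives even when both sides believe they conform.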
hedrick@ARAMIS.RUTGERS.EDU (Charles Hedrick) (11/22/87)
> Source Code: So that an organization can fix or improve the code itself,
> or have a third party do so, and not have to depend on the original
> vendor. Does your recommendation change if software maintenance is in
> effect?

We have yet to see an organization that fixes things fast enough to cope
with our situation. When a problem shows up, it shows up in spades. Like
suddenly we are using the whole bandwidth of a T1 line sending bogus name
requests. Now and then we'll call an organization and have them say "oh
yes, we know about this. What is your UUCP phone number?" But otherwise,
it is "fixed in the next release" or at best fixed in a few days. There
are situations where we would have to disable a crucial function during
the interim. So I consider it crucial to have source for as much as
possible.
bzs@BU-CS.BU.EDU (Barry Shein) (11/22/87)
Source code: I'd like to add that the critical need for source code,
besides functional fixes, is security. It's one area where it's nearly
impossible to find a vendor who provides what you need. Unfortunately,
needs are very often a customized affair, and security bugs are often an
outright emergency. It can also be a moving target: something no one
thought of yesterday suddenly, after one irate phone call, becomes first
priority. Software "maintenance" is utterly the wrong concept for this
sort of situation.

-Barry Shein, Boston University
mcc@ETN-WLV.EATON.COM (Merton Campbell Crockett) (11/23/87)
Distributing source code seems to be inconsistent with the desire to
enforce version controls. The availability of source code is an attraction
to the "tinkerer" much as a flame is to a moth - one's version control
becomes a historical artifact when the "tinkerer" gets access to the code.

If the source is distributed on a medium other than electronic, as
"documentation", it is still extremely useful and it is relatively easy to
maintain version control of the software.

Merton Campbell Crockett
bzs@BU-CS.BU.EDU (Barry Shein) (11/23/87)
> Distributing source code seems to be inconsistent with the desire to
> enforce version controls. The availability of source code is an
> attraction to the "tinkerer" much as a flame is to a moth - one's
> version control becomes a historical artifact when the "tinkerer" gets
> access to the code.
>
> If the source is distributed on a medium other than electronic, as
> "documentation", it is still extremely useful and it is relatively easy
> to maintain version control of the software.
>
> Merton Campbell Crockett

There are certainly better ways to do version control than to withhold the
sources. I've heard this argument for years and I still don't believe that
the solution is object-code-only distributions. I don't even believe this
is ever the real reason; vague fears of losing the technology are probably
the real reason, either de facto (falling into a competitor's hands) or de
jure (a court deciding you gave away the store). Some of these fears have
no rational basis, some do, but the conservative choice is obvious (even
if it loses sales?!).

For example, one could simply demand, as with all warranties, that
software will not be maintained if monkeyed with (though patches could be
supplied to everyone, and they can do what they like with them). To settle
disputes it would be easy enough to provide a simple checksum program on
the source (a sketch of the idea follows this message). Whatever, but
withholding the source has to be the worst possible solution to this
(undisputed) problem.

One thing I hate is vendors who won't even sell any source support (that
is, you don't get the source patches for minor releases, so either you
live with the bugs, obsolete your sources, or guess how to fix the
problem). Vendors could also get more aggressive about these problems
instead of sitting around getting into trouble (I have no doubt they do
with large customers who get the sources, tinker, then demand support
anyhow; money talks...). Usually when I get a source release I pays my
money and that's that: a tape shows up, even if I already have maintenance
on the software. I could see being asked to sign something which clearly
states the new responsibilities now that you have sources. Heck, to trade
options on the CBOE you have to sign a form acknowledging your lack of
good sense.

Seriously, how do you know they haven't mucked with the rest of the O/S,
binary-patched your software, etc.? Same problem. Microfiched source is
not the answer either, unless you get some sort of satisfaction at merely
looking at the buggy code that's bringing your system to its knees.
Blechh.

-Barry Shein, Boston University
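Shein's "simple checksum program on the source" could be as little as the
sketch below: a hypothetical illustration (the program name and the
particular checksum are invented here, not anything a vendor actually
shipped). The vendor publishes the per-file sums for each release; a
customer file that prints a different sum has been locally modified and
falls outside the maintenance agreement.

    /* cksrc.c - hypothetical per-file source checksum.  Usage:
     *     cksrc file.c another.c ...
     * Prints one 32-bit rotate-and-XOR sum per file, for comparison
     * against the sums the vendor published for the release.
     */
    #include <stdio.h>

    /* 32-bit rotate-left-and-XOR checksum over one stream. */
    static unsigned long cksum(FILE *fp)
    {
        unsigned long sum = 0;
        int c;

        while ((c = getc(fp)) != EOF) {
            /* rotate left one bit within 32 bits, then mix in the byte */
            sum = ((sum << 1) | ((sum >> 31) & 1UL)) & 0xffffffffUL;
            sum ^= (unsigned long)c;
        }
        return sum;
    }

    int main(int argc, char **argv)
    {
        int i;

        for (i = 1; i < argc; i++) {
            FILE *fp = fopen(argv[i], "r");

            if (fp == NULL) {
                perror(argv[i]);
                continue;
            }
            printf("%08lx  %s\n", cksum(fp), argv[i]);
            fclose(fp);
        }
        return 0;
    }

A sum like this settles "was the file changed", not "who is at fault", and
a determined tinkerer could defeat it; it handles the honest-customer case
Shein is describing.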
melohn@SUN.COM (Bill Melohn) (11/23/87)
I feel it is bogus for a vendor NOT to offer source to those customers who
have the local expertise to make use of it and are willing to pay the
extra cost of its distribution. However, many if not most end users either
do not want to do their own software maintenance or don't have a local
support staff, and would rather pay the vendor for a turnkey solution and
phone-line customer support. Within each organization, this is usually an
economic rather than a technical issue.

Unix certainly represents the only real attempt to free the end user from
dependence on a particular vendor's operating system or machine
architecture. It is far from perfect, but it does run on machines from PCs
to Crays, and it goes farther than any other vendor implementation in
providing an application and system interface that is offered by a wide
range of computer vendors on widely different hardware.
JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen") (11/24/87)
In my own judgement, a vendor might be justified in restricting
distribution of the source code for "version control" only under the
following circumstances:

1. Vendor support personnel have access to source code themselves, and
   the skills and tools to use it. These people don't have to answer the
   phone, but you need to be able to get a call back in a day or two, at
   most.

2. These vendor support personnel work in an environment in which most or
   all customer problems can be conveniently duplicated.

The 2nd condition rules out restrictions on most network source code,
since not even IBM can maintain a current copy of *every* vendor's TCP/IP
hardware and software, and thus there are many problems that can't be
duplicated at the vendor's site. Maybe TCP/IP sites are exceptional, but
the large TCP users we've sold source to have been quite well equipped
with competent personnel, and I've found working with them to be quite
productive. We have had to teach some of our OEMs quite a bit, but they
weren't users when they started out.

James B. VanBokkelen
FTP Software Inc.

PS: I've realized that I'm talking from sort of a vendor/management
perspective. If this bothers a mostly (?) engineering list, tell me and
I'll shut up except for technical issues.
henry@utzoo.UUCP (12/03/87)
> For example, one could simply demand, as with all warranties, that
> software will not be maintained if monkeyed with...

I dimly recall a software vendor who would sell you source *or* support
but not both. They had the right idea.

Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,decvax,pyramid}!utzoo!henry
daveb@geac.UUCP (12/08/87)
In article <8712040041.AA00504@ucbvax.Berkeley.EDU> henry@utzoo.UUCP
writes:
> I dimly recall a software vendor who would sell you source *or* support
> but not both. They had the right idea.

Two vendors (not necessarily of network code) which have this policy are
Honeywell (now Honeywell-Bull) and the University of Waterloo SDG.

--dave
--
David Collier-Brown.                    {mnetor|yetti|utgpu}!geac!daveb
Geac Computers International Inc.,    | Computer Science loses its
350 Steelcase Road, Markham, Ontario, | memory (if not its mind)
CANADA, L3R 1B3 (416) 475-0525 x3279  | every 6 months.