crowl@cs.rochester.edu (Lawrence Crowl) (08/21/87)
crowl@cs.rochester.EDU (Lawrence Crowl) writes:
>..., are you implying the 360 architecture was badly designed? This claim
>will need VERY good arguments to over-ride 25 years (almost) of success.

mjr@osiris.UUCP (Marcus J. Ranum) writes:
)Stone hammers, along with flint knives, showed more success (in years) than
)EBCDIC architecture, but nobody uses them anymore. Only the trailing edge of
)technology still supports 360 architecture... Arguing that your flint axe has
)had '2000 years of success' is not going to change the fact that the times
)have changed. Do you also favor laser-optical card reader technology?

Yes, stone hammers and flint knives were used for a very long time. Their
performance has not improved. Implementations of the 360 architecture have
improved immensely. On the contrary, the leading edge of technology supports
the 360 architecture. Some of the fastest scalar machines available are based
on the 360. Yes, times have changed, but "well-designed" is relative to the
time at which the design was done. Roman roads were well-designed. No one
builds them any more, but they were still well-designed.

dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
]The misconception here is that a broad user base implies high quality or
]elegance of design. Instead of offering VERY good arguments, I will simply
]offer three counterexamples without further comment.
]1. The 8086 family of CPUs versus the 680x0 family of CPUs
]2. The National Enquirer versus the Wall Street Journal
]3. Family Feud versus the MacNeil/Lehrer Report

I had no misconception, and these are not counter-examples. I did not state
that something had to be well-designed to be popular. Nor are popular things
necessarily poorly designed. Popular and well-designed are only loosely
related.

I am not necessarily stating that the 360 architecture was well-designed, but
I am saying the architecture has shown flexibility and adaptability for many
years. If you wish to say the 360 architecture is bad, you must show why its
adaptability is illusory. The 360 architecture has been implemented on
machines spanning roughly two orders of magnitude in performance. It has gone
from physical memory to virtual memory. It supported a virtual machine long
before many other architectures did.

I repeat my statement: one needs VERY good arguments to claim that the 360
architecture was badly-designed. Anyone care to provide them or refute them?
I have added comp.arch since they are likely to provide interesting input.
-- 
Lawrence Crowl		716-275-8479	University of Rochester
crowl@cs.rochester.arpa			Computer Science Department
...!{allegra,decvax,seismo}!rochester!crowl	Rochester, New York, 14627
lyang%scherzo@Sun.COM (Larry Yang) (08/22/87)
In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>crowl@cs.rochester.EDU (Lawrence Crowl) writes:
>>..., are you implying the 360 architecture was badly designed? This claim
>>will need VERY good arguments to over-ride 25 years (almost) of success.
>[....]
>I repeat my statement: one needs VERY good arguments to claim that the 360
>architecture was badly-designed. Anyone care to provide them or refute them?
>I have added comp.arch since they are likely to provide interesting input.

From what I have heard, one reason for the IBM 360's success was foresight on
the part of the designers. They decided that the machine should have 24
address bits. At the time, 16 MBytes seemed like a heck of a lot of memory.
At the time, 16 MBytes *was* a heck of a lot of memory. But the architects
recognized the trend that memory was getting denser; and as time wore on,
those 16-bit-address machines started falling by the wayside, whereas the 360
was able to just keep having its memory expanded. They designed in something
that didn't become obsolete in just a few years.

This example was given to me by a professor; I never really checked the
accuracy of the claim. I'm sure someone else out there can confirm or deny it.

--Larry Yang [lyang@sun.com, {backbone}!sun!lyang]
  Sun Microsystems, Inc., Mountain View, CA
  Hobbes: "Why do we play war and not peace?"
  Calvin: "Too few role models."
guy%gorodish@Sun.COM (Guy Harris) (08/22/87)
> From what I have heard, one reason for the IBM 360's success was foresight
> on the part of the designers. They decided that the machine should have
> 24 address bits. At the time, 16 MBytes seemed like a heck of a lot of
> memory. At the time, 16 MBytes *was* a heck of a lot of memory.

Unfortunately, one reason for what I presume was a big effort on the part of
IBM (can you say XA?) was *lack* of foresight on the part of the designers;
they decided that the machine should have 24 address bits. Unfortunately,
this not only applied to things such as memory buses, it applied to effective
address formation. As a result, everybody stuffed things into the upper 8
bits of pointers, since they weren't used; when they ran out of the 16MB
virtual address space (by that time, it had virtual memory), they had to
introduce a mode bit to permit old 24-bit-addressing applications and new
31-bit-addressing applications to run together.

From reading some of the XA documentation, it seems they planned for further
expansion, by using - wait for it - segmentation; it appears to have some
similarities to that provided by a machine nearer the bottom of their product
line. :-)

	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com
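To make the pointer-stuffing habit concrete, here is a minimal sketch in
modern C (an invented illustration, not code from any IBM system; the mask
value and the names pack() and addr_part() are made up for the example):

    #include <stdio.h>
    #include <stdint.h>

    /* With 24-bit address formation, the hardware ignores the top byte of a
     * 32-bit pointer word, so programs hid flag bytes there.  Such code
     * breaks as soon as addresses grow past 24 bits. */
    #define ADDR24_MASK 0x00FFFFFFu
    #define TAG_SHIFT   24

    static uint32_t pack(uint32_t addr, uint8_t tag)
    {
        return (addr & ADDR24_MASK) | ((uint32_t)tag << TAG_SHIFT);
    }

    static uint32_t addr_part(uint32_t word)
    {
        /* Correct only while addresses fit in 24 bits; a 31-bit address
         * would collide with the tag byte, hence XA's 24/31-bit mode bit. */
        return word & ADDR24_MASK;
    }

    int main(void)
    {
        uint32_t p = pack(0x00123456u, 0x80);
        printf("tag=%02X addr=%06X\n",
               (unsigned)(p >> TAG_SHIFT), (unsigned)addr_part(p));
        return 0;
    }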
dhesi@bsu-cs.UUCP (Rahul Dhesi) (08/22/87)
In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>. . .one needs VERY good arguments to claim that the 360
>architecture was badly-designed.

No stack, small segments, nonstandard character set with holes.
-- 
Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi
rjh@ihlpa.ATT.COM (Herber) (08/23/87)
In article <1035@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
> >. . .one needs VERY good arguments to claim that the 360
> >architecture was badly-designed.
> 
> No stack, small segments, nonstandard character set with holes.
> -- 
> Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

1. No stack: make one with existing instructions.

2. Small segments: 1 megabyte is not big enough?  And the architecture
   transparently handles the transition from one segment to another.  An
   instruction or datum can start in one segment and end in another; to the
   programmer it looks like one piece of memory.

3. Nonstandard character set: Check: I believe that EBCDIC is a standard;
   and ASCII has its problems too -- numerics sort before letters.  The
   360/370 architecture is not tied to EBCDIC.  The 360 architecture, in
   particular, had a bit in its PSW (program status word) to tell the
   hardware whether to generate EBCDIC or ASCII zones when converting from
   packed decimal to zoned decimal.

4. BTW {:-)}, this message came from an Amdahl 5890-300 (a 360/370
   architecture processor) running UTS (tm-Amdahl), which is Unix
   (reg. tm-AT&T) System V Release 2 compatible (see also: SVID and SVVS).
   The character set is ASCII.

Randolph J. Herber, Amdahl Sr Sys Eng, ..!ihnp4!ihlpa!rjh, (312) 979-6553,
IH 6X213, AT&T Bell Labs, Naperville, IL 60566
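As a rough illustration of item 1 (my own sketch in C, not any IBM calling
convention): a stack needs nothing beyond the loads, stores, and register
arithmetic the 360 already has, which is essentially what compilers on the
machine do with one register dedicated as a stack pointer.

    #include <assert.h>

    #define STACK_WORDS 1024

    static unsigned long stack_area[STACK_WORDS];
    static unsigned long *sp = stack_area + STACK_WORDS;  /* grows downward */

    /* "Push": one subtract and one store -- no PUSH opcode required. */
    static void push(unsigned long v)
    {
        assert(sp > stack_area);                 /* overflow check  */
        *--sp = v;
    }

    /* "Pop": one load and one add -- no POP opcode required. */
    static unsigned long pop(void)
    {
        assert(sp < stack_area + STACK_WORDS);   /* underflow check */
        return *sp++;
    }

    int main(void)
    {
        push(42);
        return pop() == 42 ? 0 : 1;
    }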
guy%gorodish@Sun.COM (Guy Harris) (08/23/87)
> >. . .one needs VERY good arguments to claim that the 360
> >architecture was badly-designed.
> 
> No stack, small segments, nonstandard character set with holes.

He said "VERY good arguments"; these aren't.

"No stack": what do you mean by "no stack"?  There are no "push" or "pop"
instructions, and the procedure call instruction saves the return address in
a register, but so what?  Nothing *prevents* you from implementing a stack.

"Small segments": what do you mean by "segments"?  The original 360 didn't
have any sort of memory mapping.  If you *really* mean "12-bit offsets", yes,
that may be a nuisance, but it's not an insuperable problem, and it may have
made sense given the design constraints of the day.

"Nonstandard character set": considering ASCII was relatively new at the time
(I'm not even sure to what degree ASCII *existed* in 1963!), this is simply
bogus.

"with holes": well, ASCII has holes, too; why aren't "0-9" and "a-f" or "A-F"
contiguous?

	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com
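A small example of the "ASCII has holes, too" point (illustrative C, written
for this archive rather than taken from the thread): because 'A' does not
immediately follow '9' in ASCII, turning a hex digit into its value takes
more than one subtraction.

    #include <stdio.h>

    /* Hex-digit conversion must special-case letters because '0'-'9' and
     * 'A'-'F' are not contiguous in ASCII. */
    int hexval(int c)
    {
        if (c >= '0' && c <= '9')
            return c - '0';            /* digits are contiguous     */
        if (c >= 'A' && c <= 'F')
            return c - 'A' + 10;       /* ...but 'A' is not '9' + 1 */
        if (c >= 'a' && c <= 'f')
            return c - 'a' + 10;
        return -1;                     /* not a hex digit */
    }

    int main(void)
    {
        printf("%d %d\n", hexval('9'), hexval('A'));   /* prints: 9 10 */
        return 0;
    }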
bcase@apple.UUCP (Brian Case) (08/23/87)
In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>>will need VERY good arguments to over-ride 25 years (almost) of success.
>
>Implementations of the 360 architecture have improved immensely.

BUT NOT THE ARCHITECTURE.

>On the contrary, the leading edge of technology supports the 360 architecture.

BUT NOT THE LEADING EDGE OF THE ART.

>Some of the fastest scalar machines available are based on the 360.
>
>I am not necessarily stating that the 360 architecture was well-designed, but I
>am saying the architecture has shown flexibility and adaptability for many
>years.  If you wish to say the 360 architecture is bad, you must show why its
>adaptability is illusory.  The 360 architecture has been implemented on

The adaptability is not illusory.  It is, however, bought at an extremely
high price.

>machines spanning roughly two orders of magnitude in performance.  It has gone
>from physical memory to virtual memory.  It supported a virtual machine long
>before many other architectures did.

On the contrary, the 360 (370) is (has been) more than an "almost" success.
You are correct in stating that some of the fastest scalar machines are based
on the 360 (370) architecture.  But that does NOT mean anything.  Take, for
example, the EDGE 68010 implementation in six huge, 256-pin PGA gate arrays.
It is definitely a fast processor.  The Amdahl 5860 and siblings are fast
processors.  However, those machines are, relative to recent offerings from a
few sources, VERY expensive.  They are compatible, yes, but painfully
expensive.  Within reason, it is possible to have fast implementations,
virtual machine implementations, <your adjective> implementations; the trick
is to have SMALL, CHEAP fast implementations, virtual machine implementations,
<your adjective> implementations.  The MIPS Co. processor, the SUN 4
processor, the Acorn RISC Machine processor, the Am29000 processor, etc.
have, at least for some problems, performance equal to or greater than
multimillion-dollar machines, at prices orders of magnitude lower.

The 360 (370) architecture was, for its time, perhaps not badly designed.
However, its flaws, relative to the current state of the art, are readily
apparent.  If it were to be introduced today, most (at least most of the
people *I* know who are concerned about such things) would call it a badly
designed architecture.

    bcase
chuck@amdahl.amdahl.com (Charles Simmons) (08/23/87)
In article <1035@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>>. . .one needs VERY good arguments to claim that the 360
>>architecture was badly-designed.
>
>No stack, small segments, nonstandard character set with holes.
>-- 
>Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

What do you mean by "no stack"?  Our C compiler uses a stack on our 370
architecture.  Are you complaining that auto-increment/auto-decrement
addressing modes weren't implemented?  Do current RISC chips use these
addressing modes?

What does the character set that tends to be used with an architecture have
to do with the architecture?  I don't think we have any problems using ASCII
with our architecture...

-- Chuck
   amdahl!chuck
bcase@apple.UUCP (Brian Case) (08/24/87)
In article <1035@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>>. . .one needs VERY good arguments to claim that the 360
>>architecture was badly-designed.
>
>No stack, small segments, nonstandard character set with holes.

Wait, is the character set part of the architecture!?!?  I didn't think so,
but....  Also, some of the best architectures, in my opinion, don't "have"
any stacks either (what does it mean to "have" a stack?).  The 4K-byte
addressability problem is real.  The real problems with the architecture are
related to system software interface issues and implementation ramifications
of the instruction set definition, e.g. too few registers, two-address
operations, hard-to-pipeline addressing modes, etc.

    bcase
bcase@apple.UUCP (Brian Case) (08/24/87)
In article <1589@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> >No stack, small segments, nonstandard character set with holes.
> 
> Wait, is the character set part of the architecture!?!?  I didn't think so,

But I was wrong: I forgot about the character instructions (edit, etc.).
These make assumptions about the character set, don't they?

    bcase
ken@argus.UUCP (Kenneth Ng) (08/24/87)
In article <1035@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
> >. . .one needs VERY good arguments to claim that the 360
> >architecture was badly-designed.
> No stack, small segments, nonstandard character set with holes.

Talking about holes, what are those characters between 5B hex and 60 hex in
ASCII?  Surely they aren't part of the alphabet.  But the character set has
no bearing on the architecture of a machine.  I've seen an Amdahl (which is
an IBM mainframe work-alike) running UTS with an ASCII character set.  I'm
pretty sure if you try you can get EBCDIC on a DEC machine.

As for a stack, the 360 assembler is sophisticated enough to write macros
that emulate stacks quite well; I use them all the time.

As for small segments, that's where I must agree.  Granted, I don't have too
many data structures larger than 4K, but it is a bit of an irritant.  What I
don't like about the 360 architecture is the lack of a one-instruction load
and/or store indirect.  I've written macros to do the job, but it's still a
bit of an irritant knowing that the instruction is not available.

Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet (preferred) ken@orion.bitnet
jay@splut.UUCP (Jay Maynard) (08/24/87)
In article <1580@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
> crowl@cs.rochester.EDU (Lawrence Crowl) writes:
> [...] the leading edge of technology supports the 360 architecture.
> Some of the fastest scalar machines available are based on the 360.

Yup.  Just look at a 3090-600E.  Blindingly fast, and will still solve the
real-world problems that business faces daily.

> Yes, times have changed, but "well-designed" is relative to the time at which
> the design was done.  Roman roads were well-designed.  No one builds them any
> more, but they were still well-designed.

Actually, there will be a new runway installed at Houston's Intercontinental
Airport (I think... been a while since I heard the news report) using Roman
road-building technology.  Seems that they think that the runway will last
longer and be easier to maintain.

> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> ]The misconception here is that a broad user base implies high quality or
> ]elegance of design.  Instead of offering VERY good arguments, I will simply
> ]offer three counterexamples without further comment.
> ]1. The 8086 family of CPUs versus the 680x0 family of CPUs

While people are bashing the 360 and 80x86 architectures, millions of
businesses and people are getting real, useful work done on them.

> ]2. The National Enquirer versus the Wall Street Journal
> ]3. Family Feud versus the MacNeil/Lehrer Report
>
> I had no misconception, and these are not counter-examples.  I did not state
> that something had to be well-designed to be popular.  Nor are popular things
> necessarily poorly designed.  Popular and well-designed are only loosely
> related.

Yeah.  Just look at Volvos and 680x0s.  (BTW, have you noticed that people
who drive Volvos, just as people who use 680x0s and Unix, are convinced that
the rest of us are screwing up horribly if we don't follow their lead?)

> I am not necessarily stating that the 360 architecture was well-designed, but
> I am saying the architecture has shown flexibility and adaptability for many
> years.  If you wish to say the 360 architecture is bad, you must show why its
> adaptability is illusory.  The 360 architecture has been implemented on
> machines spanning roughly two orders of magnitude in performance.  It has gone
> from physical memory to virtual memory.  It supported a virtual machine long
> before many other architectures did.
>
> I repeat my statement: one needs VERY good arguments to claim that the 360
> architecture was badly-designed.  Anyone care to provide them or refute them?
> I have added comp.arch since they are likely to provide interesting input.

And those arguments will STILL fly in the face of practical, real-world
problem solving.  Business isn't interested in conceptual purity; they want
their problems solved, now, and don't really care how they get that way -
except that they won't throw away many years and millions of dollars of
investment without a very good reason.  Unix and VAXen haven't been good
enough reasons.
-- 
Jay Maynard, K5ZC...>splut!< | uucp: hoptoad!academ!uhnix1!nuchat!splut!jay
"Don't ask ME about Unix...  | (or sun!housun!nuchat)      CI$: 71036,1603
I speak SNA!"                | internet: beats me         GEnie: JAYMAYNARD
The opinions herein are shared by neither of my cats, much less anyone else.
stuart@bms-at.UUCP (Stuart D. Gathman) (08/24/87)
The original reference to 360 architecture referred to *software*, not
hardware, since the use of EBCDIC is primarily a software (and firmware, in
the case of printers and terminals) issue.

The 360 is obviously a very good hardware design, since so many people manage
to do useful things with such terrible software.  I think VM/370 is elegant,
but then IBM never did like it very much.  The single-user OS's that run on
top of VM are, in general, still awful.

NOTE - no hard facts here (except for the first sentence).  Just opinions.  I
don't have time at the moment for hard facts, but my opinions are based on 9
years of experience.
-- 
Stuart D. Gathman	<stuart@bms-at.uucp>
			<..!{vrdxhq|dgis}!bms-at!stuart>
henry@utzoo.UUCP (Henry Spencer) (08/24/87)
> I repeat my statement: one needs VERY good arguments to claim that the 360
> architecture was badly-designed.  Anyone care to provide them or refute them?

Ask any 360 compiler implementer about base registers.  Wear your asbestos
suit.
-- 
Apollo was the doorway to the stars.  |  Henry Spencer @ U of Toronto Zoology
Next time, we should open it.         | {allegra,ihnp4,decvax,utai}!utzoo!henry
henry@utzoo.UUCP (Henry Spencer) (08/24/87)
> 2. Small segments: 1 megabyte is not big enough?
>    And the architecture transparently handles the transition
>    from one segment to another...

I believe what was being referred to was not the way the MMU does segments,
but the 12-bit addressing offset, which effectively gives you 4096-byte
segments.  The management of the base registers needed to address things
within said segments is *not* transparent by a damn long sight.  Pointer
arithmetic, at least, uses a uniform address space, but ordinary addressing
doesn't.

> 4. BTW {:-)}, this message came from an Amdahl 5890-300 ...

Amdahl builds fine implementations of a truly scummy architecture.
-- 
Apollo was the doorway to the stars.  |  Henry Spencer @ U of Toronto Zoology
Next time, we should open it.         | {allegra,ihnp4,decvax,utai}!utzoo!henry
drw@cullvax.UUCP (Dale Worley) (08/24/87)
The 360 *architecture* is really clean and elegant (although some of the 370
and later extensions aren't as nice).  It was the first machine language I
learned.  When I later learned pdp-11 machine language, I realized that the
two shared a certain elegance... mostly revolving around general registers
and symmetry of instruction structure.  (Though the 11, using the concept of
'addressing modes', was much better in that regard.)  Compare this with, say,
the 8086, which has about 15 flavors of 'move' instruction.

Now, the *software* that IBM put on the 360, on the other hand, takes
absolutely *no* awards for design.  The best proof of this is the success of
VM/370, which (in its original incarnation) essentially places a raw 370 in
the hands of the user.  VM/370 is a better program development environment
than MVS (nee OS/360), showing that a raw 370 is a better development
environment than a 370 with MVS running on it.

Dale
-- 
Dale Worley    Cullinet Software      ARPA: cullvax!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
OS/2: Yesterday's software tomorrow
Nuclear war?  There goes my career!
chuck@amdahl.amdahl.com (Charles Simmons) (08/25/87)
In article <1588@apple.UUCP> bcase@apple.UUCP (Brian Case) writes:
>The MIPS Co. processor, the SUN 4 processor, the Acorn RISC Machine
>processor, the Am29000 processor, etc. have, at least for some problems,
>performance equal to or greater than multimillion-dollar machines, at
>prices orders of magnitude lower.
>
>    bcase

Anyone have an example of an application that runs faster on a MIPS, Sun,
Acorn, or AMD machine than it does on either a 5890 or a Cray 2?

Thanks, Chuck
madsen@vijit.UUCP (Dave Madsen) (08/25/87)
----- Sorry about the length, see last paragraph ------

In article <1590@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> In article <1589@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> > >No stack, small segments, nonstandard character set with holes.
> > Wait, is the character set part of the architecture!?!? I didn't think so,
> But, I was wrong: I forgot about the character instructions (edit, etc.).
> These make assumptions about the character set, don't they?
> bcase

1) The holey character set issue is dead; see many earlier messages in this
   continuing saga.

2) The ED and EDMK instructions don't make assumptions about the character
   set; they specify edit mask characters and editing actions.  Since these
   characters are replaced during the editing process, they obviously can't
   be in the final string, and so shouldn't be normally-used "printable"
   chars.  I.e., if "A" had been defined as an editing character, you would
   not have any "A"s in the final edited string, as they would have been
   replaced by digits.  So the designers made the editing characters low in
   the collating sequence, where there aren't any "printables".

3) The packed instructions are optimized for EBCDIC, and for dealing with
   overpunched signs in numeric fields.  However, the sign zone rules are lax
   to the extent that if it ain't a 0xD, it's positive.  (I seem to remember
   that there's a non-preferred negative zone, but I can't remember... 0xB?)
   For the machine I work on (whose peripherals "know" ASCII), that's a pain.
   But my machine mfr (Wang) has taken the UNPK insn and made it make '3'
   zones instead of 'F' zones.

4) About segments: NO NO NO NO NO.  You have the wrong idea.  I program daily
   in assembler on a Wang Labs VS 100 (which has taken the 370 architecture
   and insn set and added stack, indirect call, and instruction-counter
   relative instructions), and it has NEVER EVER occurred to me to think of
   4K OFFSETS as defining 'segments'.  The 370 architecture defines a LINEAR
   address space.  The target address (for one common instruction format) is
   computed by adding 2 registers and an offset.  ANY register (except 0) may
   be used in this computation, not just some 'segment' register.  Registers
   are general-purpose.  No special registers, even for 'address' and 'data',
   let alone 'segments'.

   The 4k is not much of a limitation, as the coding style for the machine
   does not depend on a relatively fixed value in a register.  To be more
   concrete, suppose I have a large array of structures.  A register would
   typically be used to resolve to the array element, and an offset would
   address into the structure.  Not many structures have over 4k worth of
   data.  If you have data longer than that, you can always use another
   register, so that the 2nd register points to 4k past the first.  Then you
   have 8k.  The *whole idea* is that address manipulation in registers is
   easy, convenient, and natural.  (Please no flames on what natural is.
   Some would say that pre/post increment is natural, and for some machine
   architectures, they're right.  Same idea here.)

   I VERY SELDOM run into a program that has to use more than one register
   at a time to get more than 4k addressability for either code or data.
   Reenterability is easy.  Subroutine calls (via BAL or BALR) usually
   result in the subroutine using a new 'base' register and saving the old
   one in a stack or linked list.  The code is such that you usually don't
   see routines over 4k in length.  Even for data addressability you usually
   don't find over 4k.  Directly addressed items are put in the first 4k and
   other items (like data management control blocks, for example) are put
   later.  The structure of the calls to the OS sometimes makes it natural
   to put a pointer to that control block in the first 4k and use that.

Suffice it to say, I know that as a conscientious programmer, I feel guilty
when I have to use more than one base register: it's simply poor technique,
and it's NOT confining to use only one.  Please no flames from those who work
on special-purpose machines; this architecture was designed for
general-purpose work, primarily business.

I would be more than happy to converse AT LENGTH with any who would call me
at (312) 954 6512 or e-mail about this.  Summaries (as if this hasn't been
beaten to death already) could be posted if there's an idle newsgroup day.
(Maybe in talk.bizarre.)  :-)

Finally, I wish to apologize 1) for this message's length, and 2) for having
it in this newsgroup.  I simply find that, having been familiar with this
architecture for 19 years, I have a lot to say to those who make postings but
are less informed about or experienced with the architecture.

Dave Madsen  ---dcm  ihnp4!vijit!madsen or vijit!madsen@gargoyle.uchicago.edu
I sure can't help what my employer says; they never ask me first!
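For readers less familiar with the machine, here is a rough model in C (my
own notation and invented names, not IBM's) of the base-plus-index-plus-
displacement address formation Dave describes: a 12-bit displacement added to
the contents of up to two general registers, where specifying register 0
means "no register".

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t gpr[16];       /* sixteen 32-bit general-purpose registers */
    } Cpu;

    /* Effective address for an RX-format instruction: displacement plus
     * base register plus index register.  Any general register can serve
     * as base or index; register 0 in either slot means "none". */
    static uint32_t effective_address(const Cpu *cpu,
                                      unsigned base, unsigned index,
                                      uint32_t disp)
    {
        uint32_t ea = disp & 0x0FFFu;     /* 12-bit displacement (0..4095) */
        if (base  != 0) ea += cpu->gpr[base];
        if (index != 0) ea += cpu->gpr[index];
        return ea & 0x00FFFFFFu;          /* 24-bit addresses on the 360   */
    }

    int main(void)
    {
        Cpu cpu = {{0}};
        cpu.gpr[12] = 0x00008000u;        /* a "base" register   */
        cpu.gpr[3]  = 0x00000100u;        /* an "index" register */
        printf("%06X\n", (unsigned)effective_address(&cpu, 12, 3, 0x04A));
        return 0;                         /* prints 00814A */
    }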
guy%gorodish@Sun.COM (Guy Harris) (08/25/87)
> VM/370 is a better program development environment than MVS (nee OS/360),
> showing that a raw 370 is a better development environment than a 370 with
> MVS running on it.

Do you mean "VM/370" or "VM/CMS"?  If the latter, it really shows that a 370
with CMS running on it is a better development environment than a 370 with
MVS running on it (oops, typoed that as "a 370 with VMS" twice; one thing
UNIX has going for it is that its name has neither a V, nor an M, nor an S in
it).  I doubt that a raw 370 is much of a development environment at all;
toggling (turning?) programs in through the console switches (or the VM
equivalent of same) can't be much fun.

	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com
gwl@rruxa.UUCP (George W. Leach) (08/25/87)
In article <1580@sol.ARPA>, crowl@rochester.UUCP writes:
>
> I am not necessarily stating that the 360 architecture was well-designed, but I
> am saying the architecture has shown flexibility and adaptability for many
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> years.  If you wish to say the 360 architecture is bad, you must show why its
  ^^^^^
> adaptability is illusory.  The 360 architecture has been implemented on
> machines spanning roughly two orders of magnitude in performance.  It has gone
> from physical memory to virtual memory.  It supported a virtual machine long
> before many other architectures did.

I will not argue the architecture design issues.  The 360 was the top of the
line when it was introduced.  I worked with one from 1980 thru 1983, and from
a software development environment point of view (VM/CMS) it was terrible.
UNIX is such a far superior programming environment to CMS that there is NO
ARGUMENT here.

What I would like to take issue with is the longevity of the 360/370
architecture.  Is it really the adaptability and flexibility of the
architecture, or is it the fact that the huge customer base is tied into that
IBM environment?  There is a tremendous amount of $$$$ invested in COBOL,
FORTRAN and PL/1 code on those beasts that CAN NOT be moved easily to another
architecture.  This is due to such nice IBM-ONLY features as the EBCDIC
character set.

On the other hand, the $$$$ invested in code written in C under UNIX is
easily ported (if written with portability in mind) to other architectures as
they come along.  Thus one can take advantage of new advances in computer
architecture without the pain and cost of moving unportable code.

> --
> Lawrence Crowl		716-275-8479	University of Rochester
> crowl@cs.rochester.arpa		Computer Science Department
> ...!{allegra,decvax,seismo}!rochester!crowl	Rochester, New York, 14627

George W. Leach				Bell Communications Research
New Jersey Institute of Technology	444 Hoes Lane  4A-1129
Computer & Information Sciences Dept.	Piscataway, New Jersey  08854
Newark, New Jersey  07102		(201) 699-8639
UUCP:  ..!bellcore!indra!reggie
ARPA:  reggie%njit-eies.MAILNET@MIT-MULTICS.ARPA

From there to here, from here to there, funny things are everywhere
   Dr. Seuss "One fish two fish red fish blue fish"
sbanner1@uvicctr.UUCP (S. John Banner) (08/26/87)
In article <294@rruxa.UUCP> gwl@rruxa.UUCP (George W. Leach) writes:
>In article <1580@sol.ARPA>, crowl@rochester.UUCP writes:
>
>> I am not necessarily stating that the 360 architecture was well-designed, but I
>> am saying the architecture has shown flexibility and adaptability for many
>              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> years.  If you wish to say the 360 architecture is bad, you must show why its
>  ^^^^^
>> adaptability is illusory.  The 360 architecture has been implemented on
>> machines spanning roughly two orders of magnitude in performance.  It has gone
>> from physical memory to virtual memory.  It supported a virtual machine long
>> before many other architectures did.
>
>I will not argue the architecture design issues.  The 360 was the top of the
>line when it was introduced.  I worked with one from 1980 thru 1983, and from
>a software development environment point of view (VM/CMS) it was terrible.
>UNIX is such a far superior programming environment to CMS that there is NO
>ARGUMENT here.

I hate to get into this argument, but I just couldn't hold back here.  You
say "NO ARGUMENT", but I know of one person I work with (he is not on the
net; he does ALL his work on CMS and VS1) who I am sure would disagree.  I
know he has tried UNIX, and on several occasions he has asked me why I like
it (UNIX is my preferred environment, but I quite like VM/CMS as well).  Just
as a side note, he has also told me that he prefers 327x full-screen
programming to windows on his Amiga at home, and does most of his programming
in /370 assembler and REXX (the system interpreter, for those unfamiliar with
VM/CMS).

I do hope I haven't stepped on any toes here, because I don't really want to
see this topic go on for another month or two.  It has been an interesting
and to some extent informative discussion; however, I think it is beginning
to degenerate (as do all of these discussions eventually).

Thanks for listening (assuming of course you did),

                   S. John Banner

...!uw-beaver!uvicctr!sol!sbanner1
...!ubc-vision!uvicctr!sol!sbanner1
ccsjb@uvvm
sbanner1@sol.UVIC.CDN
esf00@amdahl.amdahl.com (Elliott S. Frank) (08/26/87)
In article <294@rruxa.UUCP> gwl@rruxa.UUCP (George W. Leach) writes:
>
>What I would like to take issue with is the longevity of the 360/370
>architecture.  Is it really the adaptability and flexibility of the
>architecture, or is it the fact that the huge customer base is tied into
>that IBM environment?  There is a tremendous amount of $$$$ invested in
>COBOL, FORTRAN and PL/1 code on those beasts that CAN NOT be moved easily
>to another architecture.  This is due to such nice IBM-ONLY features as
>the EBCDIC character set.
>
>On the other hand, the $$$$ invested in code written in C under UNIX is
>easily ported (if written with portability in mind) to other architectures
>as they come along.  Thus one can take advantage of new advances in
>computer architecture without the pain and cost of moving unportable code.

Having spent most of the past twenty years working in the 360/370
environment, "there is truly nothing new under the sun."  It is as possible
to write machine-dependent code in C in a UNIX environment (cf. the "how many
bits are in an int, and which is the low-order one" discussion recently
concluded in this newsgroup) as it is to write portable COBOL in the EBCDIC
MVS environment.

If you stick to a single machine architecture and operating system
environment, machine-to-machine migration becomes a problem of power cables
and air conditioning.  This becomes a very powerful economic argument for
sticking with that single machine architecture.  VAX VMS has ensured its
survival for the same reason.  You can move an application from an 11/750
(running VMS) to an 8650 (also running VMS) with minimal porting effort.

Despite the CISC aggregations that have grown up on the original 360
instruction set (Niklaus Wirth did not include support for the BXH and BXLE
[decrement {increment} index and test against limit] instructions in his
pioneering PL/360 structured assembler) (can you say "Compare and Form
Codeword" or "Update Tree"?), the longevity of the 360/370 architecture has
come from the simplicity of most of the instructions.
-- 
Elliott S Frank    ...!{hplabs,ames,seismo,sun}!amdahl!esf00     (408) 746-6384
               or ....!{bnrmtv,drivax,hoptoad}!amdahl!esf00

[the above opinions are strictly mine, if anyone's.]
[the above signature may or may not be repeated, depending upon some
inscrutable property of the mailer-of-the-week.]
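A small example of the kind of machine-dependent C Elliott alludes to
(written for this archive as an illustration, not taken from the thread):
code that peeks at the size of an int or at byte order is no more portable
than EBCDIC-bound COBOL.

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int x = 1;
        unsigned char *p = (unsigned char *)&x;

        /* Both of these answers vary from machine to machine. */
        printf("int is %d bits\n", (int)(sizeof(int) * CHAR_BIT));
        printf("low-order byte is stored %s\n",
               *p == 1 ? "first (little-endian)" : "last (big-endian)");
        return 0;
    }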
peter@sugar.UUCP (Peter da Silva) (08/27/87)
> >equal to or greater than multimillion dollar machines, at prices orders
> >of magnitude lower.
>
> Anyone have an example of an application that runs faster on a MIPS,
> Sun, Acorn, or AMD machine than it does on either a 5890 or a Cray 2?

I don't know about the 5890 or the Cray-2, but the Sun 4 sure as hell beats
out the Univac-I-mean-Sperry-I-mean-Unisys 1100/72 we're using here (and it's
a multimillion dollar machine) so long as the number of users is small.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                 U  <--- not a copyrighted cartoon :->
drw@cullvax.UUCP (Dale Worley) (08/27/87)
gwl@rruxa.UUCP (George W. Leach) writes:
> I will not argue the architecture design issues.  The 360 was the top of the
> line when it was introduced.  I worked with one from 1980 thru 1983, and from
> a software development environment point of view (VM/CMS) it was terrible.
> UNIX is such a far superior programming environment to CMS that there is NO
> ARGUMENT here.

Eh?  Here we're arguing about hardware architecture, and this guy starts
arguing OS architecture.  Do you *really* mean '360', or do you really mean
'the software that's usually run on 360s'?  After all, you can get Un*x for
360s now, and it looks just about like any other Un*x.  (And if you went to
the trouble, you could port OS/360 to the Vax!)

yours for linguistic purity,

Dale
-- 
Dale Worley    Cullinet Software      ARPA: cullvax!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
Apollo was the doorway to the stars - next time we should open it.
Disclaimer: Don't sue me, sue my company - they have more money.
henry@utzoo.UUCP (Henry Spencer) (08/27/87)
> What I would like to take issue with is the longevity of the 360/370
> architecture.  Is it really the adaptability and flexibility of the
> architecture, or is it the fact that the huge customer base is tied into
> that IBM environment? ...

In other words, the 360's longevity is not the result of the adaptability
and flexibility of the architecture, but of the *un*adaptability and
*in*flexibility of most of the 360 software.
-- 
"There's a lot more to do in space    |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova  | {allegra,ihnp4,decvax,utai}!utzoo!henry
ken@argus.UUCP (Kenneth Ng) (08/27/87)
In article <1590@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> In article <1589@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> > >No stack, small segments, nonstandard character set with holes.
> > Wait, is the character set part of the architecture!?!? I didn't think so,
> But, I was wrong: I forgot about the character instructions (edit, etc.).
> These make assumptions about the character set, don't they?
> bcase

That's why there is a bit in the PSW (on the 360 at least) that indicates
whether the machine is using ASCII or EBCDIC.

Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet (preferred) ken@orion.bitnet
neil@dsl.cis.upenn.edu (Neil Radisch) (08/27/87)
>Eh?  Here we're arguing about hardware architecture, and this guy
>starts arguing OS architecture.  Do you *really* mean '360', or do
>you really mean 'the software that's usually run on 360s'?  After all,
>you can get Un*x for 360s now, and it looks just about like any other
>Un*x.  (And if you went to the trouble, you could port OS/360 to the
>Vax!)
>
>yours for linguistic purity,
>
>Dale

Although technically true, in practice it just doesn't happen enough.  Vaxes
mostly run Unix and VMS, 360's VM, Cybers NOS or KRONOS, etc.  So from the
point of view of a system user, the OS is representative of the entire
computer family architecture.  Sure, you could put Unix on a Cyber, but every
time I sit down at one, the damn thing is running NOS or KRONOS (can you say
useless).

-neil-

(Actually I just wanted to do some Cyber bashing)
guy@gorodish.UUCP (08/28/87)
> > But, I was wrong: I forgot about the character instructions (edit, etc.).
> > These make assumptions about the character set, don't they?
> 
> That's why there is a bit in the PSW (on the 360 at least) that indicates
> whether the machine is using ASCII or EBCDIC.

ASCII-8, anyway; I don't remember whether that was compatible with ASCII or
not.  That bit is gone in the 370 (it was used for something else, possibly
the "basic control"/"extended control" mode bit).

	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com
rick@pcrat.UUCP (rick) (08/29/87)
In article <8493@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) writes:
> In other words, the 360's longevity is not the result of the adaptability
> and flexibility of the architecture, but of the *un*adaptability and
> *in*flexibility of most of the 360 software.

Naw, it's just a fast, affordable SOLUTION.  We almost went with two VAX
8700s in our latest processor upgrade, but it looks like '370 arch will win
the price/perf war on this end.  The software we run?  UNIX.  Yes, you can
teach an old dog new tricks.
-- 
Rick Richardson, President, PC Research, Inc.
(201) 542-3734 (voice, nights)  OR  (201) 834-1378 (voice, days)
seismo!uunet!pcrat!rick
gwyn@brl-smoke.ARPA (Doug Gwyn ) (08/29/87)
The C newsgroup is clearly not the right place to be discussing what the most cost-effective computer hardware is. This discussion has had nothing to do with C for quite some time and should be carried on in comp.arch instead. Thanks.
ken@argus.UUCP (09/01/87)
In article <572@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> I don't know about the 5890 or the Cray-2, but the Sun 4 sure as hell beats
> out the Univac-I-mean-Sperry-I-mean-Unisys 1100/72 we're using here (and it's
> a multimillion dollar machine) so long as the number of users is small.

I think RCA was also in there at some time.  Uh, how many people and tasks is
the 1100 running compared to the Sun?  One of the things that has always
bugged me about some "my computer is faster than yours" arguments is when
people compare something like a VS90/80 with about 100 people on it with a
VAX/750 with 2 people on it, and say that the VAX is a better machine because
its response time is faster.  Note: true, the 1100 costs a lot more than the
Sun, but I believe as many factors should be accounted for as possible.

> --
> -- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
> --                 U  <--- not a copyrighted cartoon :->

As an aside, has anyone figured out whether the company name changes of
Unisys are progressing somewhere besides confusion?

Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet (preferred) ken@orion.bitnet