[comp.arch] stack machines

rminnich@udel.EDU (Ron Minnich) (06/04/88)

In article <5141@nsc.nsc.com> stevew@nsc.UUCP (Steve Wilson) writes:
>I don't want to start any religious wars about comparing a Burroughs
>MCP against IBM's JCL that was available in the late 1970's.  But 
Well, the fact that Burroughs was behind the state of the art does
not imply that it was worse than IBM, near as I can tell. I used the
MCP a lot, and yes, it had lots of things even then that I would
like to see on Unix now; nevertheless it was far behind the
state of the art by 1975 or so. And the stack machine architecture
had similarly lost its early lead. Those of you familiar with
E-mode on the Burroughs machines know that it was basically a kludge,
much like 286 segmentation in design!
>As for Burroughs(aka Unisys) not being interested in stack machines 
>anymore, well they sure seem to be concentrating pretty hard on the
>A-series boxes.  Last time I checked, this series was stack based. 
Yes, it is; I know because I worked on the A15! There is a large market
of people who rely on that architecture, mainly for historical reasons.
But all the market surveys indicated that the B-series and A-series
were not going to set the world on fire sales-wise, and in fact
would be a decreasing share of the market.
So for a long time to come you will see cost-reduced, higher-performance
implementations of this architecture.
That does not imply that
Unisys sees its future development occurring in this area; in fact,
from what I see, they seem to like SPARC ...

-- 
ron (rminnich@udel.edu)

dlsc1032@dlscg1.UUCP (Alan Beal) (06/08/88)

In article <2868@louie.udel.EDU>, rminnich@udel.EDU (Ron Minnich) writes:
> Well, the fact that burroughs was behind the state of the art does
> not imply that it was worse than IBM, near as i can tell. I used the
> MCP a lot, and yes it had lots of things even then that I would
> like to see on Unix now; nevertheless it was far behind the
> state of the art by 1975 or so.
> ...
> But all the market surveys indicated that the B-series and A-series
> were not going to set the world on fire sales-wise, and in fact
> would be a decreasing share of the market. 

At my place of employment, we have 4 B7800's, and I would say from my
experience that the problem with Burroughs is not its hardware but its
lack of development in software.  After all, most application programmers
and end-users are not too concerned whether the architecture is stack based
or not, but are more concerned with the software capabilities of the system.
Burroughs still has not come out with a relational database package (SIM is
a semantic database sitting on top of DMSII, not relational), and DMSII does
not support SQL.  Also, the number of software vendors out there writing
software for Burroughs machines is dismal at best.  This is too bad, since
Algol is a nice development language even though it does not support data
structures other than arrays.

The B7800 is no longer being produced, and the current software release
(3.6) will be its last.  Our main problem with this machine is the lack of
addressable memory - 2**20 words, or 6 Mb.  Of course, this has been solved
with the ASD memory in the A series machines.  With ASD memory, a program is
limited to 1024K data structures, where a data structure is a file, database,
array, etc.  An array is limited to 2**32 words.

I would argue that the MCP is a fairly sophisticated operating system that
is easy to work with.  For example, multitasking and inter-program
communication are easy to program.  Algol is the predominant systems
programming language, and there is little need to learn machine code.  I have
always felt that if Burroughs had developed UNIX, the world would be turning
to Algol instead of C as the language of choice.

-- 
Alan Beal   DLSC-ZBC                 Autovon    932-4160
Defense Logistics Services Center    Commercial (616)961-4160
Battle Creek, MI 49015               FTS        552-4160
UUCP:  {uunet!gould,cbosgd!osu-cis}!dsacg1!dlscg2!abeal

haynes@ucscc.UCSC.EDU (99700000) (06/10/88)

>people wanted that to become a variable sector size, and that is what is
>going to take them so long to implement.  Not exactly modualar software.

Gee, seems like only yesterday that IBM had just discovered the
merits of fixed sector size, after Burroughs and DEC had been using
it all these years.  All the times that users have had to completely
re-format their files after some new model IBM disk replaces an
older one...  But maybe you're talking about something different.

>  Algol 68 does support Data structures.  And most smart companies have
>upgraded their systems to Algol 68 or beyond.  Why Unisys hasn't, I don't know.

Errr, which companies do you have in mind?  I don't pretend to be a
walking catalog, but I haven't heard anyone mention Algol68 in the last
15 years or so, much less try to sell me a compiler for it.
>
>>I would argue that the MCP is a fairly sophisticated operating system that is 
>
>  I wouldn't. :-)

Well there are different meanings to "sophisticated", but for real
sophistry you need to see a system that uses a lot of letter
abbreviations like JES and MVS and VS1 and VM and CMS and OS and ...
where each flavor needs a different compiler and file system.
>
>  Even Unisys is moving towards knowing Machine code, they have a new 
>piece of software called DumpAnalyser, they seemed to feel the need to spend
>three weeks teaching me how to use it {they do this for all new employee's}.
>And if you think what it puts out is Algol code you are sadly mistaken, it's
>basicly for "reading" the stack of a program, and if that's not machine code

Gee, back in B5500 days we had a dump analyzer that would print out the
stack and various MCP tables so you could almost read it.  In contrast
to certain other machines of the same period that would just give you
pages and pages of pure hexadecimal in neat columns, but you had to
figure out yourself where the structures were.  Or buy a third-party
dump formatter for lots of money.  But that was just for use with
MCP dumps.  For user programs you never needed dumps because the
abort message told you exactly where to look in the program listing
and gave you the reason for the abort in English.  I guess things have
gone downhill since then.


haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

stevew@nsc.nsc.com (Steve Wilson) (06/10/88)

In article <3147@polyslo.UUCP> dorourke@polyslo.UUCP (David O'Rourke) writes:
>different versions.  The MCP is such an old design that a recent design
>document estimated that it would take 180-200 man years to change the way
>sectors were handled.  Right now MCP assumes 180-bytes per sector, well so
>people wanted that to become a variable sector size, and that is what is
>going to take them so long to implement.  Not exactly modualar software.

There are hysterical reasons for having 180-byte sectors.  As I recall,
even the ancient and honourable B1000 series used the same sector size,
implying that it was a company media standard.

It may be time for a change in such things, but when you're fighting
25 years of history there is a lot of momentum to do things a certain
way.

>
>  But for the first time in years Unisys is getting the "new breed" of 
>programmers in, and most all of the people that I was hired on with are out
>to change the way Unisys does software.  Most of the new people I worked
>with were as upset as I was about the lack of software tools, and they are
>working rather quickly to fix it, just before going back to school I was
>playing with an editor that I'd actually consider using.  It was written
>by one of the new programmers in his spare time and frustration with the 
>current editor.
>

Now CANDE, etc. ain't that bad!  Just compare it to what was available
before such things.  I KNOW you haven't had to use PUNCH CARDS!  This
again isn't saying there isn't a lot of room for improvement.  Heck, I
was yelling and screaming at the large systems human interface
clear back in 1979!  Just to get sentimental on ya: when I was in
school we had this old, ancient 360 Model 50.  It could run a couple
of jobs at once, MAX.  The Burroughs MCPs were multiprogramming at
the same time that OS/360 was released back in the early 60's.  There
was a point in time when the MCP was a beautiful thing to behold.
The ALGOL compiler could compile a couple of thousand lines a minute
versus the stuff that was available elsewhere, which ran in the
hundreds-of-lines ballpark.

Now, I've heard some friends of mine, who I have it on good authority
were highly responsible for much of the early work on the MCP,
lament the monolithic structure of the MCP.  Hindsight is always
20/20, you know!

>  Also people are taking a good hard look at MCP and wondering what to do 
>about it.  So you might see some significant changes in the next few years.
>But I really don't know, it was just mostly shop talk at the water cooler,
>but the ideas, and frustration with the current system, are there so there
>might be a change.
>
>  Anyways there's a recent Unisys employee's observations
>
>  If anyone thinks I speak for Unisys they need some mental help!!
>
>-- 
>David M. O'Rourke
>
>Disclaimer: I don't represent the school.  All opinions are mine!

Well, David, maybe you can help bring an old architecture back to life.
Besides, Mission Viejo is a MUCH nicer place to live than say...
Pasadena!

Steve Wilson
National Semiconductor
[EL grad '79]

[ Universal disclaimer goes here!]

aglew@urbsdc.Urbana.Gould.COM (06/13/88)

..> Talk about the history (and future) of Burroughs/Unisys machines.

I showed some of this discussion to a guy I know who used to work for
Unisys, and here was his response ("he" was one of the earlier posters):

He is basically correct about the user interface for debugging and for
editing on the A series.  

He is wrong about the hardware.  The A-Series hardware pays a great
penalty for being stack oriented.  It is bigger (3M gates).  It has
bigger code files (because of all those extra pushes and pops).  It uses
a lot of gates to decompile programs into an internal 3-address
register-register machine.

A-Series hardware spends much of its time making up for the stack and
tags architecture.  If the same pounds of hardware and technology were
used to make a more RISCy machine, it would run at least 3 times faster.

The disaster at Unisys was that during the late 70s the computer
systems area was run by a manufacturing person and his cronies.  The
belief was that you could just reproduce the old design, concentrating
on packaging and cost, and technology would take care of speed (the
same silly trap many RISC people are falling into).  This was made
worse by the religious zealotry of the next level below him, who felt
that the instruction set was what made the systems successful.

This is indeed another case of an organization that didn't know why
its products were successful.  In my opinion, what made the product
successful was: 1. Development in and for a higher-level language made
the original system very usable and programmer oriented; 2. A small
development team helped make it simple, and the first commercial use of
virtual memory made it easy to program and administer (job scheduling);
3. The first true, clean, tightly coupled shared-memory multiprocessing
made performance incremental for most DP-type workloads; 4. A very clean
and rational I/O subsystem for its day (descriptors rather than chained
I/O commands) + fast head-per-track disk + fixed-sector-size disks +
device exchanges which allowed several channels and controllers to
access the same disks + striping files across multiple spindles and
controllers (all in the late 60s).  This combined for a high-performance
I/O subsystem which was easy to use and administer.

In the early 70s they came out with DMS, the first full-featured database
management system.

In the late 70s the 6800 came out, which was a disaster with its
ill-considered shared global memory with local memories.  This, along
with the religion and management problems, led to Burroughs sitting and
waiting while, one by one, their good features were "invented" by IBM.
Now they are behind.

It's not as if in the meantime they made the system better; rather,
they added layers of software and complexity that stopped the system
from being easy to use.

jps@wucs2.UUCP (James Sterbenz) (06/14/88)

In article <3693@saturn.ucsc.edu> haynes@ucscc.UCSC.EDU (Jim Haynes) writes:
>Well there are different meanings to "sophisticated", but for real
>sophistry you need to see a system that uses a lot of letter
>abbreviations like JES and MVS and VS1 and VM and CMS and OS and ...
>where each flavor needs a different compiler and file system.

I personally find the 5000/6000/7000/A architectural basis very elegant,
but c'mon, get your history right ... 
these are not comparable...

OS/MFT -> OS/MVT -> OS/VS (later VS1) -> VS2 (SVS) -> MVS -> MVS/XA
Even though some of these have been available concurrently
(MFT and MVT, VS1 and MVS) for periods of time, it is essentially
a succession of versions of the SAME system -- of the above list,
the ONLY currently available versions are MVS(/SP) and MVS/XA.

It would have been like having 5500 MCP, 6500 MCP, MCP MARK whatever ...
for a similar list.  If you read an MCP manual or system architecture manual
you'll find PLENTY of obnoxious acronyms (e.g. MSCW).

By the way ...

JES (Job Entry Subsystem) is a part of MVS (optional in the OS days,
but now required: JES2 from HASP, JES3 from ASP).

VM is a completely different operating system consisting of
VM, the hypervisor (CP-44 -> CP-67 -> CP) and various
guest operating systems, of which CMS is one.   MVS can run under CP.   

haynes@ucscc.UCSC.EDU (99700000) (06/14/88)

In article <853@wucs2.UUCP> jps@wucs2.UUCP (James Sterbenz) writes:
>In article <3693@saturn.ucsc.edu> haynes@ucscc.UCSC.EDU (Jim Haynes) writes:
>>Well there are different meanings to "sophisticated", but for real
>>sophistry you need to see a system that uses a lot of letter
>>abbreviations like JES and MVS and VS1 and VM and CMS and OS and ...
>>where each flavor needs a different compiler and file system.
>
>I personally find the 5000/6000/7000/A architectural basis very elegant,
>but c'mon, get your history right ... 

Guess I should have put a ;-)   after "sophistry" -  Of course, the B5000
and its successors are the only machines I would call "elegant".  All these
letter abbreviations are just what I hear coming over the partition from
co-workers trying to get the latest machine from that three-letter company
into production.

For further reference in that particular debate see the article "A Tale of
Two Computers"  in the May, 1977 issue of IEEE Computer magazine.

haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

elg@killer.UUCP (Eric Green) (06/15/88)

In message <853@wucs2.UUCP>, jps@wucs2.UUCP (James Sterbenz) says:
$In article <3693@saturn.ucsc.edu> haynes@ucscc.UCSC.EDU (Jim Haynes) writes:
$>Well there are different meanings to "sophisticated", but for real
$>sophistry you need to see a system that uses a lot of letter
$>abbreviations like JES and MVS and VS1 and VM and CMS and OS and ...
$>where each flavor needs a different compiler and file system.
$
$the ONLY currently available versions are MVS(/SP) and MVS/XA.

$VM is a completely different operating system consisting of
$VM, the hypervisor (CP-44 -> CP-67 -> CP) and various
$guest operating systems, of which CMS is one.   MVS can run under CP.   

Fact remains that VM/CMS and MVS use different file systems, different
compilers, different application software, different everything just
about. How sophisticated can you get, huh?

I get the impression from looking at the IBM manuals that IBM is
somewhat embarrassed by the success of VM/CMS, and is trying to slowly
phase it out or merge it with MVS... BLETCH!  IBM already has two
top-heavy, poorly-designed OS's, and wants to make an even more
top-heavy poorly-designed OS? Now I know the heritage of OS/2
(half-OS) :-(. 

Hard to believe that IBM can have an operating system ten times more
complex than Unix V7, with 1/10th the power and elegance. But then
again, nobody ever bought IBM for their technical prowess (although
their 3090 certainly has some neat hardware).

--
Eric Lee Green    ..!{ames,decwrl,mit-eddie,osu-cis}!killer!elg
          Snail Mail P.O. Box 92191 Lafayette, LA 70509              
"Is a dream a lie if it don't come true, or is it something worse?"

dlsc1032@dlscg1.UUCP (Alan Beal) (06/15/88)

In article <3147@polyslo.UUCP>, dorourke@polyslo.UUCP (David O'Rourke) writes:
>   At last count system 3.7 was somewhere in the neighborhood of 700-800
> thousand lines, the only reason Unisys was forced to implement libraries is
> because the MCP was getting so big the compilier couldn't treat it as one
> single program anymore.

  I can't believe I am about to defend Unisys, but I would say that the
number of lines of code in the MCP has nothing to do with the implementation
of libraries.  The DMSII access routines were one of the first versions of
libraries, even though they weren't called a library, and DMSII has been
around since the 70's.  I cannot speak to Unisys's intent, but libraries
offer the modularity desired in large complex systems.  Take COMS for
example: the majority of COMS code is implemented as libraries, as are BNA
and the new print subsystem.  Libraries eliminate the need for binding all
those code files together - I would call this a software engineering
enhancement, not a solution to the number of lines in the MCP.  If you take
a good look at libraries, aren't they implemented in a manner similar to
those in OS/2?  My only complaint is that Unisys has not put a lot of
effort into developing new products using the newest features in the MCP,
i.e. libraries and port files.

>   And triing to keep up with all of the different versions of Algol that
> Unisys has: Newp, DC-Algol, ect..   No wonder no software gets written when
> ever you want to change something you have to work across at least three
> different versions of the same language, several different versions of the
> MCP, and you have that wonderful editor with which to look at all of this
> code.

   Here I go again defending Unisys. )-:   I am not sure that you understand
how the different versions of Algol are used.  Algol, DCalgol, DMalgol, and
BDMSalgol are all compiled from a single source - symbol/algol.  Here are 
their capabilities:

   1) Algol - normal application development capabilities
   2) BDMSalgol - Algol plus DMSII capabilities
   3) DCalgol - Algol plus data communications and system programming
		capabilities.  No DMSII capabilities.
   4) DMalgol - DCalgol plus DMS accessing and development capabilities

   Which version of Algol is used is determined by the type of programming
involved, and I never seem to get confused about which version to use.

   Where are all these versions of the MCP?  As far as I know, we can only
purchase one version for the B7800.  Again, you are confused by the fact
that each machine series has an MCP tailored to that particular machine, in
order to handle the way memory is managed and other gory details, but
basically the functionality is the same between MCPs.

>   Even Unisys is moving towards knowing Machine code, they have a new 
> piece of software called DumpAnalyser, they seemed to feel the need to spend
> three weeks teaching me how to use it {they do this for all new employee's}.
> And if you think what it puts out is Algol code you are sadly mistaken, it's
> basicly for "reading" the stack of a program, and if that's not machine code
> I don't know what is, they still don't have an assembler, but they're now
> allowing the programmers to "look" at the code produced by the compiler.

   How many application programmers out there have used Dump Analyzer?  How
many know what it is?  Dump Analyzer is a tool for systems programmers to
analyze memory dumps, i.e. what programs were in the mix at the time of
the dump and where they bombed off.  Our staff spends a lot of time using
this tool and at times has looked at the machine code of the offending
program, but for the most part the machine code offers little insight into
the cause of the problem, and usually the problem is passed on to Unisys to
solve.  I would agree that knowledge of the stack architecture is very
helpful, especially in debugging programs.  However, most people can debug
their programs without ever looking at machine code.  Finally, the stack
is not machine code but an internal data structure for storing variables,
pointers to data, and recording the environment of the program.  It would
be a mistake to say Unisys is moving towards machine code and assemblers,
since the move from within is to get the Sperry side out of that mode of
operation.


Software is the name of the game for most companies now.  IBM realizes this.
Unisys does not.  The Burroughs side of Unisys has concentrated on further
enhancing its current software and has not made great efforts at developing
new products.  For example, LINC was developed by a company in New Zealand
and was purchased by Burroughs as a 4GL.  How many people like to use LINC?
It has nice syntax like 'MOVE ; FIELD1 FIELD2'.  Would you call this an
end-user or programming language?  Then there was GATEWAY developed by
Joseph and Cogan which was competing with COMS.  Burroughs bought J & C and
now we have COMS.  What happened to GATEWAY?  And now we have SIM, a semantic
database system sitting on top of DMSII.  Does it offer SQL or provide access
to other DBMS systems like DB2?  No, of course it doesn't.  And how about
BNA - it is a nice way to connect Burroughs machines together, but can I
connect our UNIX machine to it?  Again, no.  And speaking of BNA, it would
be an excellent vehicle around which to develop a distributed DBMS.  Are
there any plans to do this in the future?  You know the answer.

-- 
Alan Beal   DLSC-ZBC                 Autovon    932-4160
Defense Logistics Services Center    Commercial (616)961-4160
Battle Creek, MI 49015               FTS        552-4160
UUCP:  {uunet!gould,cbosgd!osu-cis}!dsacg1!dlscg2!abeal

dricej@drilex.UUCP (Craig Jackson) (06/16/88)

In article <3693@saturn.ucsc.edu> haynes@ucscc.UCSC.EDU (Jim Haynes) writes:
>>people wanted that to become a variable sector size, and that is what is
>>going to take them so long to implement.  Not exactly modualar software.
>
>Gee, seems like only yesterday that IBM had just discovered the
>merits of fixed sector size, after Burroughs and DEC had been using
>it all these years.  All the times that users have had to completely
>re-format their files after some new model IBM disk replaces an
>older one...  But maybe you're talking about something different.

I suspect that what the author was talking about was being able to
configure the sector size of a disk - sort of like setting the block size
of a file system.  The present A-series MCP must have its disks formatted
into 180-byte sectors; not 360, not 720.  This information is known all
over the place, including in any well-written application.  (One wants
one's blocks to be an integral number of sectors long, for example.)

There was an interesting comparision between VM, MVS, and MCP which came
up when we were doing a procurement last year.  VM wastes disk because it
is pre-allocated to each user, and the free space cannot be shared.  MVS
wastes disk because nearly all datasets are pre-allocated, period.  MCP
wastes disk because of the ridiculously small sector size.  It came out
a wash.

Of course, Unix doesn't waste much disk, but takes *forever* to get to
it (in mainframe terms), due to the block-level allocation and all the
indirection.

>>  Algol 68 does support Data structures.  And most smart companies have

>>upgraded their systems to Algol 68 or beyond.  Why Unisys hasn't, I don't know.
>
>Errr, which companies do you have in mind?  I don't pretend to be a
>walking catalog, but I haven't heard anyone mention Algol68 in the last
>15 years or so, much less try to sell me a compiler for it.

The original author was way off-base on this one.  Algol 60 and Algol 68
share little except for a name.  Algol 68 is certainly not a simple upgrade
from 60--it's a completely different language.

>>  Even Unisys is moving towards knowing Machine code, they have a new 
>>piece of software called DumpAnalyser, they seemed to feel the need to spend
>>three weeks teaching me how to use it {they do this for all new employee's}.
>>And if you think what it puts out is Algol code you are sadly mistaken, it's
>>basicly for "reading" the stack of a program, and if that's not machine code
>
>Gee, back in B5500 days we had a dump analyzer that would print out the
>stack and various MCP tables so you could almost read it.  In contrast
>to certain other machines of the same period that would just give you
>pages and pages of pure hexadecimal in neat columns, but you had to
>figure out yourself where the structures were.  Or buy a third-party
>dump formatter for lots of money.  But that was just for use with
>MCP dumps.  For user programs you never needed dumps because the
>abort message told you exactly where to look in the program listing
>and gave you the reason for the abort in English.  I guess things have
>gone downhill since then.

The Dump Analyzer printed the object code because frequently during
systems debugging you need to know exactly which part of an expression
caused the problem.  This might be more important on the A-series than
on simple linear-address machines, because the hardware concepts were
so close to the applications language.

This portion of the architecture actually makes certain forms of 
debugging very nice.  The user-level dump facility on the A-series
is the nicest I've ever seen.  (On the latest release, they even
add symbolic variable-name information.  Sure, many debuggers have
offered this in the past, but how many dumps?)  What makes this possible
are the very things which the RISC people dislike: applications complexity
built into the hardware.  The tagged architecture allows the dump to
automatically know about data types; the descriptor-based architecture 
allows the dump to know more about the size and nature of each array; and
the stack-based architecture, with hardware support for block-structure,
makes call histories and variable addressing very easy.

(The above paragraph is my concession to architectural discussion.)

You may believe that good debuggers make dumps unnecessary, but I've found
that most significant bugs occur in a production environment, and
the important thing to do in that environment is to capture information
(a dump, or a core file) and get the user back on the air somehow as soon
as possible.

>haynes@ucscc.ucsc.edu


-- 
Craig Jackson
UUCP: {harvard!axiom,linus!axiom,ll-xn}!drilex!dricej
BIX:  cjackson

dorourke@polyslo.UUCP (David O'Rourke) (06/19/88)

In article <372@dlscg1.UUCP> dlsc1032@dlscg1.UUCP (Alan Beal) writes:
>  I can't believe I am about to defend Unisys, but I would say that the 
>number of lines of code in the MCP has nothing to do with the implementation
>of libraries.

  Have you ever worked for Unisys?  No, I don't think so.  And yes, the
current implementation of libraries came about as a request from the MCP
group {which I worked with at the Mission Viejo/Lake Forest plant} to the
compiler group to implement libs, because the MCP was getting too large.
Yes, Unisys has always had libs, but not until recently were they used to a
great extent in internal/external production code; no one trusted them, and
programmers would put the code in-line rather than porting it out to a lib,
most of the time claiming performance.  Unisys's library implementation
is quite elegant and doesn't impose a performance hit after the first use.
Many programmers who have been with Unisys for MANY years don't see the
benefit of using libs, and you have to pull teeth to get them to use any of
the standard routines that are provided if they can write them themselves.
One programmer designated to teach me how to program the A-Series told me
not to make MCP calls because of the high overhead; he said always do the
simple stuff yourself so that your program runs faster.  Well, if making
MCP calls has such a high overhead and most programmers don't call them,
then what's the point of having the MCP allow external calls?  Many
programmers were so worried about the performance of their code that they
would forsake compatibility {i.e. doing it themselves rather than letting
the MCP do it, which would allow future compatibility}.
And if you think this is an isolated attitude, you are wrong!

>modularity desired in large complex systems.  Take COMS for example, the
>majority of COMS code is implemented as libraries, as well as BNA and the
>new print subsystem.  Libraries eliminate the need for binding all those
>code files together - I would call this a software engineering enhancement not
>a solution to the number of lines in the MCP.

  Yes, but a great majority of programmers at Unisys don't see the benefit
of this enhancement.  And as for COMS being modular, that's one of the
reasons libraries came about: COMS got to be so large they couldn't fit it
in one program, so they too requested that libs be implemented.  And as for
COMS being good code, well, you can chuck that idea; the slightest change
in the MCP normally breaks COMS.  If you want to test your MCP patch, test
it with COMS and see if it still works.  Why is this, you say?  Well,
because some programmer did something himself rather than going through the
standard library call.  Yes, libs are implemented, but they are used as
ways to break code into smaller chunks for the compiler.  They are rarely
used to provide a standard interface or provide future compatibility.  Most
programmers at Unisys that I met simply didn't bother to make calls to
other libraries if they could write the code themselves.  This causes lots
of compatibility problems for future upgrades and nullifies one of the
benefits of libs.
  And COMS is an interesting beast in itself.  I spent 4 months going over
that code and talking to anyone I could find regarding information on COMS.
The Mission Viejo plant has over 400 programmers working there, and I
talked to at least 100 of them, asking a simple question: What does COMS
do, and what does MCP do?  No one to this day has answered that question
straightforwardly.  MCP and COMS are so intertwined that no one can tell
them apart; there are many places in COMS that have subroutines identical
to ones in the MCP, because the original programmer didn't bother to look
and see if the MCP did it already.  What this means is that when that part
of the MCP is changed, someone else has to go in and change the same
routine in COMS.  In fact, a special flag in patch manager was implemented
to flag changes to either of the pieces of code and notify the appropriate
department that they need to change their code.  Yep, Unisys has libraries
all right; now could someone teach them how to use them?

>If you take a good look at
>libraries, aren't they implemented in a manner similar to those in OS/2?  My

  Are you comparing the A-Series to OS/2?  Please, the A-Series deserves
better than that.  OS/2: 1/2 an operating system   :-)

>only complaint is that Unisys has not put a lot of effort in developing new
>products using the newest features in the MCP, ie. libraries and port files.

  Have you ever watched a system using port files?  Not real fast!  Many of
the newest features of the MCP aren't understood by the vast majority of
the programmers at Unisys, hence you don't get to see a lot of software that
uses them.

>>   And trying to keep up with all of the different versions of Algol that
>> Unisys has: NEWP, DCAlgol, etc.   No wonder no software gets written when
>> ever you want to change something you have to work across at least three
>> different versions of the same language, several different versions of the
>> MCP, and you have that wonderful editor with which to look at all of this
>> code.
>
>   Here I go again defending Unisys. )-:   I am not sure that you understand
>how the different versions of Algol are used.  Algol, DCalgol, DMalgol, and
>BDMSalgol are all compiled from a single source - symbol/algol.  Here are 
>their capabilities:

  I am quite aware of the different versions of Algol.  If they are so
similar, then why does Unisys have a 600-1000 (or more) page manual
describing just the features of each version?  Each of the Algol manuals
refers back to the standard Algol 60 manual, the lowest common denominator
at Unisys, and then goes on to describe the "differences" between the
"standard" and "this version" of Algol.  These manuals are large, technical,
and very scant in their descriptions of the various functions.  The
languages are not the same; many have major differences and version-specific
syntax that isn't available in the other Algols.  You should spend 2 or 3
weeks going back and forth between Algol, NEWP, and DCAlgol to find an
obscure bug in one of the routines in the MCP.  That code wasn't standard
Algol, and each part typically used the specialized features of that
particular language.  Go off and do that, and then come back and tell me
that all of these languages are the same and that if you know one you know
them all.  Yeah, that's true for the simple stuff, but not for the sort of
stuff that Unisys typically writes.

>   Where are all these versions of the MCP?  As far as I know we can only
>purchase one version for the B7800.  Again you are confusing the fact that

  If you will read the available postings you will find that I worked for
the A-Series group.  They have released MCP 3.7 for the entire A-Series line
of computers.  It is an upgrade from 3.6 and is not machine specific, except
that it has to run on an A-Series.  When I left to finish school they were
already coding MCP 3.8 and planning MCP 3.9.  Again, I'm confusing nothing!
You just don't understand.

>be a mistake to say Unisys is moving towards machine code and assemblers
>since the move from within is to get the Sperry side out of that mode of
>operation.

  How would you know what the move from within is?  And the version of
DumpAnalyser that I used did indeed allow you to look at both the stack and
the machine code pointed to by the different descriptors in the stack.

>BNA - it is a nice way to connect Burroughs machines together but can I connect
>our UNIX machine to it?  Again, no.  And speaking of BNA, it would be an
>excellent vehicle in which to develop a distributed DBMS around.  Are there
>any plans to do this in the future?  You know the answer.

  Ahh, but when I asked my manager when we were going to implement a
distributed filing system, he said: "We already have a distributed file
system."  Well, in fact they don't.  The people at Unisys will tell you that
they already have a distributed DBMS, and they think they do, when in fact
they are sadly mistaken.  The whole purpose of my original article was to
indicate that although the E-mode architecture is quite nice, the software
that Unisys is running on top of it is quite old and wasn't very
sophisticated when it was new.
  Unisys needs to make some radical changes if they are going to continue
to compete.  There are people inside who are trying, but they have 20 years
of inertia to fight, so we'll see what happens.  Stay tuned.

-- 
David M. O'Rourke

Disclaimer: I don't represent the school.  All opinions are mine!

dorourke@polyslo.UUCP (David O'Rourke) (06/23/88)

In article <374@dlscg1.UUCP> dlsc1032@dlscg1.UUCP (Alan Beal) writes:
>  While we are on the subject, how does Unisys compare to other vendors in the
>number of patches applied to its software releases?  Currently we are release

  I was personally dismayed at the instability of the MCP.  For a 20 year
old OS it really isn't all that bulletproof.  We had production machines
that would halt-load every day just to clean themselves up.  I don't know of
too many other OSes that get so messed up that they have to re-boot to clean
everything out.
  Judging from this I'd say the Unisys A-Series MCP tends to need more bug
fixes than other equivalent OSes.  BUT I DON'T REALLY KNOW; THIS IS OPINION
BASED ON MY EXPERIENCE.

>3.6 and are up to the 9th patch cycle.  The problem with most of the patches
>is that they usually cause more problems than they fix; most of the time it
>seems that no one must have tested the patches.  It is such a problem that
>the so called Unisys experts never seem to be able to remember what things
>were changed at what release.  As standard operating procedure we always
>extensively test any new patch release before permanently installing it on
>the system.  And it always seems that we are returning back to earlier, less
>patched versions of the same MCP release.  It can become a nightmare.

  The major problem seems to be that by the time the bugs are found, the
programmers have: A) quit due to frustration with Unisys, B) moved to
another section, or C) moved to another project.  After the MCP group turns
its software in to Product Assurance they don't see it again for about 3-6
months, and by that time they've moved on to something else.  Normally the
person fixing a bug isn't the one who wrote it.  This also demonstrates the
problem with the monolithic structure of the MCP: a change in one area of
the program might have side effects in other areas.  Another problem stems
from what I mentioned in an earlier article: nobody reuses code!!  Everyone
writes it themselves rather than trying to make central calls.  Well, if
there is a problem with one part of the code in the MCP, then there is a
VERY VERY VERY high probability that there is similar code somewhere else in
the MCP that also needs to be fixed, but typically this isn't found until
after the release.  It seems to be a problem with the software engineering
aspect of the MCP rather than bad programming.  The architecture of the MCP
is forced on newer programmers even though they know better, and it just
continues to grow unchecked.
   An internal estimate from a couple of people in the group I used to work
for was that if you were to re-implement the MCP, not adding a single
feature, just freezing it and rewriting it from scratch with today's
software techniques, you could probably get it down to about 300-400
thousand lines, almost a 50% code reduction, and it would be MUCH MUCH more
extensible and easier to maintain.  The problem is that they estimated the
project would take 50-60 man-years, and management didn't seem to like the
idea of spending all of those resources just to get what they "already"
have.  I personally think it needs to be done, or else the monster of the
MCP is going to eat the A-Series people alive.  Things already grind to a
slow crawl when any new feature is added to the MCP, and this will soon
become almost infinite in the amount of resources required to make even the
slightest change.
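For what that estimate implies, a little back-of-the-envelope arithmetic
(illustrative only; the figures are just the ones quoted above):

```python
# If a from-scratch rewrite yields 300-400 thousand lines and that is
# "almost a 50% code reduction", the current MCP must be on the order
# of 600-800 thousand lines.
rewrite_low, rewrite_high = 300_000, 400_000
current_low, current_high = 2 * rewrite_low, 2 * rewrite_high

# 50-60 man-years for the rewrite works out to roughly 5-8 thousand
# delivered lines per man-year:
lines_per_my_low = rewrite_low / 60      # pessimistic case
lines_per_my_high = rewrite_high / 50    # optimistic case
print(current_low, current_high, lines_per_my_low, lines_per_my_high)
```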

   Thank you for your comments.  This has been an interesting discussion.
I've learned a lot, and I hope some people have found my comments useful.
I'm willing to continue it, but I've also been requested to move it to
another group.  Perhaps someone could recommend a group if people want to
continue this discussion.

-- 
David M. O'Rourke

Disclaimer: I don't represent the school.  All opinions are mine!