[comp.arch] Computers for users not programmers

conor@lion.inmos.co.uk (Conor O'Neill) (01/24/91)

In article <1991Jan23.003505.21615@dsd.es.com> rthomson@dsd.es.com (Rich Thomson) writes:
>In article <6388@exodus.Eng.Sun.COM> work@dgh.Eng.Sun.COM (David G. Hough at work) writes:
>>The comp.arch problem for the 1990's is how to provide
>>a way for consumers to benefit from the power of Unix without actually
>>knowing anything about it, and without diminishing the availability or
>>the underlying power for the craftsmen.

I agree.

>I think the world likes the desktop metaphor, but Unix won't be really
>taking off until something better (and more aligned with Unix's
>capabilities and strengths) comes along.
>
>For instance, 
>
>o   How do I create a symbolic link in the icon-based world?
>o   How do I connect multiple programs through pipes in the icon
>    world?

A 'user' doesn't know what a symbolic link is, and doesn't want to know.
Ditto pipes.

>What is needed to make unix easy is a visualization of the interaction of
>system programs, the file system, and shell commands.

A 'user' also doesn't want to know what system programs, file systems, and
shell commands are.

A 'user' just wants to run applications.
---
Conor O'Neill, Software Group, INMOS Ltd., UK.
UK: conor@inmos.co.uk		US: conor@inmos.com
"It's state-of-the-art" "But it doesn't work!" "That is the state-of-the-art".

ssr@fourier.Princeton.EDU (Steve S. Roy) (01/25/91)

>>o   How do I connect multiple programs through pipes in the icon
>>    world?
>
>A 'user' doesn't know what a symbolic link is, and doesn't want to know.
>Ditto pipes.

Actually, there are cases where the 'user' wants to do that sort of
thing.  There are a couple of scientific visualization tools around
now (apE, AVS) that let you connect "computation elements" thru
"pipes" to perform a calculation and display the result.  The
computation elements are intended to do one thing and do it well, and
then pass the results on to the next.

Each of these systems comes with a tool for setting up a network
graphically, drawing an icon for each tool and lines for each pipe.  It
is very intuitive, if a bit rough in these first versions.

This setup has the extra advantage that there can be more than one
input and more than one output, and you can have loops and more
general networks.
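
In ordinary shell terms the same idea is just a pipeline.  A rough sketch
(the program names here are invented):

	simulate | tee raw.dat | render

tee gives you one crude form of multiple output; the graphical tools
generalize this to arbitrary networks with several inputs and outputs.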

Steve Roy

andy@research.canon.oz.au (Andy Newman) (01/25/91)

In article <13985@ganymede.inmos.co.uk> conor@inmos.co.uk (Conor O'Neill) writes:
>A 'user' just wants to run applications.

Aren't you a computer user?

Give (non-programmer) ``users'' some credit ... a user who understands that
they can construct their own applications by plugging together some tools
in the correct order would want some easy mechanism to construct pipelines
(and pipelines aren't the only model).
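
For example (a minimal sketch - the file name is made up), stripping
duplicates from a mailing list takes exactly two tools and one pipe:

	sort names.txt | uniq > clean.txt

No 'programming' beyond knowing that the tools exist and can be joined.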




-- 
Andrew Newman, Software Engineer.            | Net:   andy@research.canon.oz.au
Canon Information Systems Research Australia | Phone: +61 2 805 2914
P.O. Box 313 North Ryde, NSW, Australia 2113 | Fax:   +61 2 805 2929

hrubin@pop.stat.purdue.edu (Herman Rubin) (01/25/91)

In article <1991Jan24.222501.7054@research.canon.oz.au>, andy@research.canon.oz.au (Andy Newman) writes:
> In article <13985@ganymede.inmos.co.uk> conor@inmos.co.uk (Conor O'Neill) writes:
> >A 'user' just wants to run applications.
> 
> Aren't you a computer user?
> 
> Give (non-programmer) ``users'' some credit ... a user who understands that
> they can construct their own applications by plugging together some tools
> in the correct order would want some easy mechanism to construct pipelines
> (and pipelines aren't the only model).

There are, unfortunately, some who want the software to do all their thinking
for them.  It is only those who can be called non-programmers.  Anyone who has
to put things together is already doing programming.

As andy points out, even a pipeline is a program.  One of the problems I have
with using "packages" is that they do not allow the combining of tools.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

muyanja@hpdmd48.boi.hp.com (bill muyanja) (01/27/91)

>o   How do I connect multiple programs through pipes in the icon
>    world?

Pipes are just one form of interprocess communication.  Various GUIs (mostly 
on PCs) are already evolving sophisticated IPC protocols that preserve
the software tools approach of Unix in an icon world.  In fact, these
IPCs, with their so-called "hot-link" metaphor, may be one solution
to the problem of distributed computing in a heterogeneous environment
without resorting to a standard Application Binary Interface across
machine architectures.

I suspect that it would be trivial, for instance, to establish communication
between an X-client on a RISC box and MS-Excel under Windows using the DDE
protocol, although I don't have the resources to try it myself.

- bill

  "Standard disclaimers apply ..."

klaus@cnix.uucp (klaus u schallhorn) (01/28/91)

In article <4488@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>In article <1991Jan24.222501.7054@research.canon.oz.au>, andy@research.canon.oz.au (Andy Newman) writes:
>> In article <13985@ganymede.inmos.co.uk> conor@inmos.co.uk (Conor O'Neill) writes:
>> >A 'user' just wants to run applications.
>> 
>> Aren't you a computer user?
>> 
>> Give (non-programmer) ``users'' some credit ... a user who understands that
>> they can construct their own applications by plugging together some tools
>> in the correct order would want some easy mechanism to construct pipelines
>> (and pipelines aren't the only model).
>
>There are, unfortunately, some who want the software to do all their thinking
>for them. 
>
And they drive their cars the same way. They probably consume "the sun",
or "the national enquirer" for breakfast. Give 'em a solar calculator,
that'll keep 'em puzzled.

klaus

>--
>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
>Phone: (317)494-6054
>hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)


-- 
George Orwell was an Optimist

magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) (01/28/91)

In article <4488@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>In article <1991Jan24.222501.7054@research.canon.oz.au>, andy@research.canon.oz.au (Andy Newman) writes:
>> Give (non-programmer) ``users'' some credit ... a user who understands that
>> they can construct their own applications by plugging together some tools
>> in the correct order would want some easy mechanism to construct pipelines
>> (and pipelines aren't the only model).
>
>There are, unfortunately, some who want the software to do all their thinking
>for them.  It is only those who can be called non-programmers.  Anyone who has
>to put things together is already doing programming.

There will always be some people who (for various reasons) refuse to learn
*anything* about the systems they're using (not only computers). There'll
always be the kind of computer "user" who doesn't know what to do when their
computer asks them to "press any key".

However, the majority of users *are* prepared to learn how to use, for example,
a computer program, *provided it's not too difficult*. The reason most users
don't know how to put applications together with pipes is perhaps that it's
perceived as too difficult? Maybe they *would* use pipes if their computer
presented them with an intuitively clear user interface? Like, many PC users
don't even know how to delete files - they've never bothered to learn, because
it's too difficult to remember all those cryptic three-letter commands in
MS-DOS. However, I've never met a single Macintosh user who didn't know that
you delete a file by dragging the icon to the trashcan.

One argument presented against a simple model for pipes (like the trashcan
model for file deletion) is that "users don't want to use pipes anyway".
I believe that if we make it easy enough for them to use pipes, then they
will use them! Herman Rubin holds that a user who knows how to use pipes isn't
a user, he/she's a programmer. That's of course a matter of definition - but in
that case, let's give the users an opportunity to become "programmers" (in Mr.
Rubin's sense) without having to read thick books about Unix.

Users aren't stupid (at least, not most of them). However, most of them lack
the energy and motivation to think like programmers. Instead of forcing people
to adapt to computers, wouldn't it be much nicer if we adapted computers to 
people? 


Magnus Olsson                   | \e+      /_
Dept. of Theoretical Physics    |  \  Z   / q
University of Lund, Sweden      |   >----<           
Internet: magnus@thep.lu.se     |  /      \===== g
Bitnet: THEPMO@SELDC52          | /e-      \q

jlg@lanl.gov (Jim Giles) (01/29/91)

From article <1991Jan28.112723.15274@lth.se>, by magnus%thep.lu.se@Urd.lth.se (Magnus Olsson):
> [...]
> Users aren't stupid (at least, not most of them). However, most of them lack
> the energy and motivation to think like programmers.  [...]

It has nothing to do with motivation or intelligence.  It's just a matter
of cost-effectiveness.  If it takes longer to learn to do something than
the value of being able to do it - it's _more_ intelligent _not_ to
learn it.

> [...]                                                Instead of forcing people
> to adapt to computers, wouldn't it be much nicer if we adapted computers to 
> people? 

Exactly!!

J. Giles

magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) (01/29/91)

In article <12830@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
>From article <1991Jan28.112723.15274@lth.se>, by magnus%thep.lu.se@Urd.lth.se (Magnus Olsson):
>> [...]
>> Users aren't stupid (at least, not most of them). However, most of them lack
>> the energy and motivation to think like programmers.  [...]
>
>It has nothing to do with motivation or intelligence.  It's just a matter
>of cost-effectiveness.  If it takes longer to learn to do something than
>the value of being able to do it - it's _more_ intelligent _not_ to
>learn it.

Well, that's part of what I meant by `motivation' - if you've come to the
conclusion that something isn't worth learning, you're not very motivated,
are you?

Magnus Olsson                   | \e+      /_
Dept. of Theoretical Physics    |  \  Z   / q
University of Lund, Sweden      |   >----<           
Internet: magnus@thep.lu.se     |  /      \===== g
Bitnet: THEPMO@SELDC52          | /e-      \q

hrubin@pop.stat.purdue.edu (Herman Rubin) (01/29/91)

In article <12830@lanl.gov>, jlg@lanl.gov (Jim Giles) writes:
> From article <1991Jan28.112723.15274@lth.se>, by magnus%thep.lu.se@Urd.lth.se (Magnus Olsson):
> > [...]
> > Users aren't stupid (at least, not most of them). However, most of them lack
> > the energy and motivation to think like programmers.  [...]
> 
> It has nothing to do with motivation or intelligence.  It's just a matter
> of cost-effectiveness.  If it takes longer to learn to do something than
> the value of being able to do it - it's _more_ intelligent _not_ to
> learn it.
> 
> > [...]                                                Instead of forcing people
> > to adapt to computers, wouldn't it be much nicer if we adapted computers to 
> > people? 
> 
> Exactly!!
> 
> J. Giles

This is exactly wrong.  It assumes that the computer can be programmed to do
exactly what the user wants.  In my experience, this is far from the case.

What does happen is that the user is taught that what the guru has put in
the language, system, etc., is what the computer is capable of.  The user
is deliberately kept from even finding out the capabilities of the hardware,
and then the hardware is built in such a way as to make these capabilities
difficult and expensive.

To adapt computers to people, we have to let the people who understand
what hardware can be capable of get into the act.  One of the recent
"developments" is to separate integer and floating-point operations.  That
some people cannot see the need for keeping them together does not mean
it should not be done, nor that it should be made expensive.  High-accuracy
arithmetic can be done in floating point, but it is a real mess; usually
one makes the floating point emulate fixed point.

The same applies to packages.  They rarely do what is really needed, or they
do it clumsily.

BC (before computers), it was necessary for the user to put together what was
wanted.  This is programming.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

chip@tct.uucp (Chip Salzenberg) (01/29/91)

According to jlg@lanl.gov (Jim Giles):
>If it takes longer to learn to do something than the value of being
>able to do it - it's _more_ intelligent _not_ to learn it.

The problem is making that decision intelligently.  Most users -- most
programmers, for that matter -- have no way to predict accurately the
future value of any given skill.

Users who learn willingly are more likely to pick up something useful
(to them) than those who must have the value of each skill proven to
them before they'll deign to learn it.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
   "All this is conjecture of course, since I *only* post in the nude.
    Nothing comes between me and my t.b.  Nothing."   -- Bill Coderre

magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) (01/29/91)

In article <4724@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>In article <12830@lanl.gov>, jlg@lanl.gov (Jim Giles) writes:
>> From article <1991Jan28.112723.15274@lth.se>, by magnus%thep.lu.se@Urd.lth.se (Magnus Olsson):
>> > [...]                                                Instead of forcing people
>> > to adapt to computers, wouldn't it be much nicer if we adapted computers to 
>> > people? 
>> 
>> Exactly!!
>> 
>> J. Giles
>
>This is exactly wrong.  It assumes that the computer can be programmed to do
>exactly what the user wants.  In my experience, this is far from the case.
[rest of article deleted to save bandwidth]

I'm not quite sure what you mean, or why you think it is wrong to adapt computers
to people, but I think you might have misunderstood me.

I do *not* subscribe to the doctrine that you should make computers "user 
friendly" by "castrating" them (amputating features). "Make a system even a fool
can use, and only a fool will want to use it". Alas, lots of software produced
today seems to follow that principle. 

What I would like to see is systems that are intuitively easy to use, while still
being as powerful as hairy, command-driven systems. For example, Unix has a very
steep learning curve, but once you're over the threshold, it allows you to work
a lot more effectively than with, for example, a Macintosh. However, if you give
a 'typical user' the choice between a Mac and a Unix workstation, he/she will in
all probability choose the Mac. If forced to use Unix, he/she will never learn
more than the most common commands (ls, cat, rm, mv and the command to start the
word processor) and then complain that "you can't do anything on this 
machine" (I've seen this happen). If someone could find an intuitive, graphical
way of representing Unix's command structure without sacrificing any of the power, 
IMHO that would be a Good Thing. Even if it meant sacrificing some of the power,
it would probably mean that the user could work more efficiently than before.

Let's face it, Mr. Rubin: You may be a Real Programmer (or whatever you choose
to call yourself), and I may be a hacker. We may be willing to invest some time
in learning to utilize computers optimally (in your case, it seems you're really
making them jump through hoops). But most users are decidedly *not* willing to 
do that. They want to use their computers to do some specific task, as simply
as possible. If we can make it simpler for them to do more than that specific
task, we may find that they actually do learn as they go. Many total 'computer
illiterates' have become power users thanks to the simple interface of the Mac -
when they found out that they actually could understand what was going on, they
got interested and wanted to learn more. Compare this to the common reaction to
MS-DOS - "No, I don't want to know how to delete files (or whatever) - it's
sufficiently complicated as it is" - an attitude I've encountered only too often.

However, if we take the elitist attitude that "I'm not going to adapt my computer
to any stupid users - if they want to use it, they'd better learn to use it 
properly", we'll continue to see the great mass of users alienated from what could
perhaps have been their most important tool, had they been able to utilize it
to a greater extent.

Of course, I'd like as much as you would to see a world where *everybody* could
squeeze the last ounce of performance out of their computers. However, for
the foreseeable future, that must remain a utopian vision. What we *can* do is
make it simpler for users to use at least a part of their computers' full
potential. That does *not* have to mean that we make it impossible for 
knowledgeable people to use their knowledge as before - just as having a nice,
integrated, user-friendly programming environment doesn't mean that you have
to forbid the use of hand-optimized machine code.

Magnus Olsson                   | \e+      /_
Dept. of Theoretical Physics    |  \  Z   / q
University of Lund, Sweden      |   >----<           
Internet: magnus@thep.lu.se     |  /      \===== g
Bitnet: THEPMO@SELDC52          | /e-      \q

jlg@lanl.gov (Jim Giles) (01/30/91)

From article <4724@mentor.cc.purdue.edu>, by hrubin@pop.stat.purdue.edu (Herman Rubin):
> [...]
> What does happen is that the user is taught that what the guru has put in
> the language, system, etc., is what the computer is capable of.  The user
> is deliberately kept from even finding out the capabilities of the hardware,
> and then the hardware is built in such a way as to make these capabilities
> difficult and expensive.

You have just vehemently agreed with me!  Learning about the hardware
and figuring out how to make better use of it is _one_ of the things
that _is_ worth learning about.  It's all those poorly designed and
fairly weak UNIX 'tools' that are hard to learn, hard to use, and don't
do much that's worthwhile that I object to - I don't want to be stuck
using what some 'guru' tells me to use either.

Simple, easy-to-use access to the guts of the machine is a worthy
goal for OS design.  I agree wholeheartedly with you on this.  It's
only one of the worthy goals though.  Some people already have excess
capacity on their machines and don't need to push performance any
more.  These people have different goals than you do.  Strangely,
one of the few things that you have in common is that the UNIX style
of tools hold you both back (for different reasons).

So, the point about designing systems for people instead of training
people for systems is this: systems should be designed to make it
easy to perform those tasks which people want to do.  You want to
engage in 'full contact programming' - so the system be designed to
allow you to do that.  It should _NOT_ be designed to _require_
everyone else to use the machine the way you want to.  Similarly,
other users have different needs and the way they work should not
be forced upon you.  This brings us to the UNIX tools/pipes/shells
crowd, who want to force _their_ way of working onto _everybody_.

J. Giles

phil@brahms.amd.com (Phil Ngai) (01/30/91)

In article <1991Jan29.150122.4454@lth.se> magnus@thep.lu.se (Magnus Olsson) writes:
|However, if you give 
|a 'typical user' the choice between a Mac and a Unix workstation, he/she will in
|all probability choose the Mac. If forced to use Unix, he/she will never learn
|more than the most common commands (ls, cat, rm, mv and the command to start the
|word processor) and then complain that "you can't do anything on this 
|machine" (I've seen this happen).

You mean like vi and troff? I wish I knew what to say about people who
think they are good enough.

--
When someone drinks and drives and hurts someone, the abuser is blamed.
When someone drinks and handles a gun and hurts someone,
the media calls for a gun ban.

magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) (01/30/91)

In article <1956@cluster.cs.su.oz.au> yar@cluster.cs.su.oz (Ray Loyzaga) writes:

>Troff is over 20 years old, it is usually good enough if you know
>your stuff, but it is not state of the art, it is however extremely
>flexible and cheap on most Unix machines. It also can be enhanced
>easily by the "tools approach", something that the packaged text
>processors lack.
>All the packages cost too much to compete with troff for my purposes,


What about TeX? It's free.

Magnus Olsson                   | \e+      /_
Dept. of Theoretical Physics    |  \  Z   / q
University of Lund, Sweden      |   >----<           
Internet: magnus@thep.lu.se     |  /      \===== g
Bitnet: THEPMO@SELDC52          | /e-      \q

magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) (01/30/91)

In article <1991Jan30.014945.22840@amd.com> phil@brahms.amd.com (Phil Ngai) writes:
>In article <1991Jan29.150122.4454@lth.se> magnus@thep.lu.se (Magnus Olsson) writes:
>|However, if you give 
>|a 'typical user' the choice between a Mac and a Unix workstation, he/she will in
>|all probability choose the Mac. If forced to use Unix, he/she will never learn
>|more than the most common commands (ls, cat, rm, mv and the command to start the
>|word processor) and then complain that "you can't do anything on this 
>|machine" (I've seen this happen).
>
>You mean like vi and troff? I wish I knew what to say about people who
>think they are good enough.

No, of course I meant emacs and TeX :-) [Ha ha, only serious]

Actually, I had two categories of users in mind:

a) The "secretary" type, who knows how to use the word processor (yes, there
   *are* good, user-friendly WPs under Unix), but who *never* seems able to
   learn more than the most elementary file-handling commands - lpr, ls,
   rm and mv - and has great difficulty remembering even those.

b) The users with some knowledge of programming (let's not be picky about whether
   these are programmers or users - they act as users, anyway), who use one 
   terminal window on their workstation and the simplest possible editor (like
   dxnotepad under DECwindows) to write, compile and link small Fortran
   programs, and learn to do this and to send and receive mail, but never much
   more. 
   
This last category is especially likely to complain that they can't do anything
on the system, and to conclude that Unix is utterly unusable.



One good example of what I meant by my comparison of Unix and Macs is the 
following: 

The concept of `folders' on the Mac is identical to what is called 
subdirectories under Unix, MS-DOS, VMS and so on. Under these last
OS's, you see lots of users who refuse to learn anything about subdirectories,
but put all their files in the root directory. Is this because the concept of
a subdirectory is so inherently difficult that the users just can't understand
it? Possibly - but I haven't seen a single Macintosh user (at least not one
with a hard disk) who didn't understand and use folders.

Folders on the Mac are not in any way less powerful than subdirectories under
Unix. Anything you can do with subdirectories, you can do with folders.
However, the Mac Finder provides a much more intuitive model for them than Unix
does - on the Mac, you click on a folder, it opens, and you see what's inside.
Under Unix, you don't open a subdirectory, you `move' to it, by issuing a
strange `cd' command, and you're never really sure which subdirectory you're
in, or what a subdirectory really *is*, for that matter. The point is: to use
the Unix subdirectories, you have to have some *understanding* of how the
directory tree works. To get that understanding, you must spend some time
reading manuals, or having it explained to you. On a Mac, the concept of a
folder is intuitively obvious once you've played around with the Finder for
five minutes or so (at least to any reasonably intelligent user. There are
always exceptions). This is a good example of computers (or, rather, user
interfaces) being adapted to users *without* losing anything in the process.
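
(To make the contrast concrete, here is roughly what the Unix user has to
learn instead - the directory names are invented:

	cd reports/1991        # "open" a folder - no visible change
	pwd                    # print where you are
	cd ../..               # climb back up two levels

The tree itself is never shown; you have to carry it in your head.)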


Followups to comp.misc, please - this hasn't got anything to do with 
architecture any more!

Magnus Olsson                   | \e+      /_
Dept. of Theoretical Physics    |  \  Z   / q
University of Lund, Sweden      |   >----<           
Internet: magnus@thep.lu.se     |  /      \===== g
Bitnet: THEPMO@SELDC52          | /e-      \q

mcdonald@aries.scs.uiuc.edu (Doug McDonald) (01/30/91)

In article <1991Jan30.100611.6787@lth.se> magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) writes:
>
>One good example of what I meant by my comparison of Unix and Macs is the 
>following: 
>
>The concept of `folders' on the Mac is identical to what is called 
>subdirectories under Unix, MS-DOS, VMS and so on. 

True, the concept is.

>Folders on the Mac are not in any way less powerful than subdirectories under
>Unix. 


But that is NOT true: there is no way on a Mac (using Finder, not MPW)
to go from one open folder, back through the root directory,
and up another multilayered tree into a presently closed directory
directly, in one step.  (cd ../../graphics/Macpaint/dirty pictures - please
note that Mac folder names can have spaces in them.)


This is the most basic, absolutely fatal, flaw of the Mac idea.


Doug McDonald

jlg@lanl.gov (Jim Giles) (01/31/91)

From article <27A58CDB.5E5@tct.uucp>, by chip@tct.uucp (Chip Salzenberg):
> [...]
> Users who learn willingly are more likely to pick up something useful
> (to them) than those who must have the value of each skill proven to
> them before they'll deign to learn it.

Yes.  Except that there's a _huge_ body of skills out there to be learned.
_MOST_ of these other skills are _obviously_ more useful than the UNIX
'tools' stuff.  The most productive programmers I have known here have
only recently been introduced to UNIX (most think it's horrible).  I'd
rather learn what _they_ know than learn UNIX.

J. Giles

tj@Alliant.COM (Tom Jaskiewicz) (01/31/91)

In article <1991Jan30.094646.6510@lth.se> magnus@thep.lu.se (Magnus Olsson) writes:
>What about TeX? It's free.

FREE??

You mean it doesn't cost me any disk space??
You mean I don't have to spend any time learning how to use it??

This is a new definition of "free" of which I was not previously aware!
-- 
##########################################################################
# The doctrine of nonresistance against arbitrary power, and oppression is
# absurd, slavish, and destructive of the good and happiness of mankind.
#   -- Article 10, Part First, Constitution of New Hampshire

pkraus@bonnie.ics.uci.edu (Pamela Joy Kraus) (01/31/91)

In article <1991Jan30.153036.25723@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
>
>In article <1991Jan30.100611.6787@lth.se> magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) writes:
>>
>>One good example of what I meant by my comparison of Unix and Macs is the
>>following:
>>
>>The concept of `folders' on the Mac is identical to what is called
>>subdirectories under Unix, MS-DOS, VMS and so on.
>
>True, the concept is.
>
>>Folders on the Mac are not in any way less powerful than subdirectories under
>>Unix.
>
>
>But that is NOT true: there is no way on a Mac (using Finder, not MPW)
>to go from one open folder, back through the root directory,
>and up another multilayered tree into a presently closed directory
>directly, in one step.  (cd ../../graphics/Macpaint/dirty pictures - please
>note that Mac folder names can have spaces in them.)
>
>
>This is the most basic, absolutely fatal, flaw of the Mac idea.
>
>
>Doug McDonald

     The feature is built into the Finder but is disabled. A program
such as Layout 1.9 will switch the bit on for you (it's in a pref file
somewhere). Once enabled, you double-click on a window's title bar to take
you up one level in the hierarchy (a lot like 'dot dot'). It works
nicely enough, considering how often it actually comes up for me in a day's
work (perhaps this says something about the way people should organize their
files - i.e., it's a lot easier under Finder to say 'these three folders really
belong over HERE' (click...drag) than it is with UNIX commands).

Another useful feature is offered by a shareware program called Boomerang,
which installs as an addition to the standard-file dialog. It remembers
which directories/folders you've been in recently (even across power downs)
and lets you hop back and forth via a pop-up menu. I find this actually
more efficient than typing in 50-character pathnames, and occasionally
mistyping same.

In short, I wouldn't call the Finder's interface fatally flawed.

mike (Michael Stefanik) (01/31/91)

In article <12953@lanl.gov> lanl.gov!jlg (Jim Giles) writes:
| [...] It's all those poorly designed and
|fairly weak UNIX 'tools' that are hard to learn, hard to use, and don't
|do much that's worthwhile that I object to - I don't want to be stuck
|using what some 'guru' tells me to use either.

I won't waste much time on this one: if you think that UNIX tools are
weak, then you don't know how to use them correctly.  'Nuff said there.

|Simple, easy-to-use access to the guts of the machine is a worthy
|goal for OS design. [...] systems should be designed to make it
|easy to perform those tasks which people want to do.  You want to
|engage in 'full contact programming' - so the system should be designed to
|allow you to do that.  It should _NOT_ be designed to _require_
|everyone else to use the machine the way you want to.  Similarly,
|other users have different needs and the way they work should not
|be forced upon you.  This brings us to the UNIX tools/pipes/shells
|crowd, who want to force _their_ way of working onto _everybody_.

You're still missing the point about tools, I see. <sigh>  The tools
approach *forces* nothing.  You have the freedom to do what you want,
how you want.  The price of admission: an IQ exceeding that of a treestump,
and the interest (and motivation) to learn something new.  If you don't
meet the above criteria, then why are you even bothering in the first place?
-- 
Michael Stefanik                       | Opinions stated are not even my own.
Systems Engineer, Briareus Corporation | UUCP: ...!uunet!bria!mike
-------------------------------------------------------------------------------
technoignorami (tek'no-ig'no-ram`i) a group of individuals that are constantly
found to be saying things like "Well, it works on my DOS machine ..."

hrubin@pop.stat.purdue.edu (Herman Rubin) (01/31/91)

In article <12953@lanl.gov>, jlg@lanl.gov (Jim Giles) writes:
> From article <4724@mentor.cc.purdue.edu>, by hrubin@pop.stat.purdue.edu (Herman Rubin):
> > [...]
> > What does happen is that the user is taught that what the guru has put in
> > the language, system, etc., is what the computer is capable of.  The user
> > is deliberately kept from even finding out the capabilities of the hardware,
> > and then the hardware is built in such a way as to make these capabilities
> > difficult and expensive.
 
> You have just vehemently agreed with me!  Learning about the hardware
> and figuring out how to make better use of it is _one_ of the things
> that _is_ worth learning about.  It's all those poorly designed and
> fairly weak UNIX 'tools' that are hard to learn, hard to use, and don't
> do much that's worthwhile that I object to - I don't want to be stuck
> using what some 'guru' tells me to use either.
 
> Simple, easy-to-use access to the guts of the machine is a worthy
> goal for OS design.  I agree wholeheartedly with you on this.  It's
> only one of the worthy goals though.  Some people already have excess
> capacity on their machines and don't need to push performance any
> more.  These people have different goals than you do.  Strangely,
> one of the few things that you have in common is that the UNIX style
> of tools hold you both back (for different reasons).

I have not agreed with you.  Access to the guts of the machine has been
made purposely difficult in all systems which I have seen recently.
It is not for OS design only; user applications need what the gurus,
information suppliers, etc., have not seen fit to make available to
the users.  Then the next generation of hardware designers leaves out operations
that are simple in hardware but very messy and costly in software.  In many cases,
the design of the 50s with modern technology would outperform the present
stuff.

> So, the point about designing systems for people instead of training
> people for systems is this: systems should be designed to make it
> easy to perform those tasks which people want to do.  You want to
> engage in 'full contact programming' - so the system should be designed to
> allow you to do that.  It should _NOT_ be designed to _require_
> everyone else to use the machine the way you want to.  Similarly,
> other users have different needs and the way they work should not
> be forced upon you.  This brings us to the UNIX tools/pipes/shells
> crowd, who want to force _their_ way of working onto _everybody_.

I believe the systems are doing too much already.  A compiler, editor,
etc., should not be part of a system.  Loaders of fully compiled files,
file manipulation, job allocation, etc., are needed.  The system should
make provision for flexible inclusion of the others.

The UNIX guys are not the only ones.  It is not UNIX which requires 
executable files to have a designation .exe.  The UNIX loaders which
I have used do not restrict object files to have a .o designation.
Try to get documentation for the guts of a machine written with the
idea that an intelligent user will do something which was not anticipated
by the manufacturer or vendor.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

hrubin@pop.stat.purdue.edu (Herman Rubin) (01/31/91)

In article <27A58CDB.5E5@tct.uucp>, chip@tct.uucp (Chip Salzenberg) writes:
> According to jlg@lanl.gov (Jim Giles):
> >If it takes longer to learn to do something than the value of being
> >able to do it - it's _more_ intelligent _not_ to learn it.
> 
> The problem is making that decision intelligently.  Most users -- most
> programmers, for that matter -- have no way to predict accurately the
> future value of any given skill.
> 
> Users who learn willingly are more likely to pick up something useful
> (to them) than those who must have the value of each skill proven to
> them before they'll deign to learn it.

One thing which greatly improves understanding and minimizes time is
anticipation and the use of a logical approach.  At the hardware level,
computers are capable of relatively simple operations, and these
operations need to be combined in useful ways to get results.  A
computer should be looked upon as a fast sub-imbecile doing exactly
what it is told, no matter how stupid the action is.

Not just in using computers, but in just about everything else, it is
not productive to wait to learn basic material until it is needed.  It may not
even be possible to recognize its existence.  Someone who believes that
only a small set of hard-wired fonts can be used is not going to ask for
the ability to change the characters on the screen.  Someone who does not
know that they are dot images is unlikely to realize that the change is
possible.

Again, not just in the computer field, I see everywhere the tendency to
make it difficult for people to even get the information, and to instead
invoke dependence on experts to tell them what to do.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

chip@tct.uucp (Chip Salzenberg) (02/01/91)

According to jlg@lanl.gov (Jim Giles):
>... there's a _huge_ body of skills out there to be learned.

No argument.

>_MOST_ of these other skills are _obviously_ more useful than the UNIX
>'tools' stuff.

The burden of proof is on you.  Prove this assertion, if you can.

>The most productive programmers I have known here have
>only recently been introduced to UNIX (most think it's horrible).

This anecdotal evidence proves nothing.  Anyone who changes
environments will notice only the features missing from their old
environment, since the new and potentially useful features aren't yet
a part of their work patterns.  Thus, initial reactions to an
environment change will almost always be negative.  No surprise here.

>I'd rather learn what _they_ know than learn UNIX.

You would trust UNIX neophytes to evaluate the value of UNIX?
To me, that decision appears very unwise.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
 "I want to mention that my opinions whether real or not are MY opinions."
             -- the inevitable William "Billy" Steinmetz

schumach@convex.com (Richard A. Schumacher) (02/01/91)

Funny thing about that subset of users called customers: they're
the ones with the money. They have to be convinced to buy your stuff. 
Often they won't sit still long for a lot of lecturing about what 
you think they should want and use versus what they DO use and what
THEY think they want. If they go elsewhere with their money, who is 
the loser?

jlg@lanl.gov (Jim Giles) (02/01/91)

From article <409@bria>, by mike@bria:
> In article <12953@lanl.gov> lanl.gov!jlg (Jim Giles) writes:
> | [...] It's all those poorly designed and
> |fairly weak UNIX 'tools' that are hard to learn, hard to use, and don't
> |do much that's worthwhile that I object to - I don't want to be stuck
> |using what some 'guru' tells me to use either.
> 
> I won't waste much time on this one: if you think that UNIX tools are
> weak, then you don't know how to use them correctly.  'Nuff said there.

Ok, you're the expert.  Let's hear how to do asynchronous I/O on UNIX
(not _buffered_ through the system - _real_ asynchronous I/O)?  Oh,
that's right, UNIX can't do that.

Well, let's hear how to tell the system not to kill my active processes
when the system goes down?  Oh, UNIX doesn't have any automatic crash
recovery.

How about reconnect (you know, my modem drops off in the middle of a job
and I want to sign on and reconnect to the job)?  Oh, UNIX automatically
kills jobs when connection fails except when the job is 'nohup'ed - but,
even then, you can't reconnect: the only thing you can do with a
'nohup'ed job from a _different_ terminal session is kill it.
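
(To spell that out, the best you can do is something like

	nohup bigjob &        # 'bigjob' = whatever long-running program you like

and if the line then drops you can watch nohup.out grow from your next
session - but you can never get the job's terminal back.)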

Ok, how about error recovery?  Job fails (after, say, an hour of CPU
time) with a divide-by-zero error.  I want to go into the 'core' file
and edit out the zero and restart the job from where it left off.  Oh,
UNIX can't restart 'core' files.

Etc., etc.  Yes, you've really convinced me.  UNIX is _really_ useful.
(Heavy sarcasm - in case you don't get it.)

> [...]
> You're still missing the point about tools, I see. <sigh>  The tools
> approach *forces* nothing.  You have the freedom do what you want,
> how you want.  The price of admission: an IQ exceeding that of a treestump,
> and the interest (and motivation) to learn something new.  [...]

What?  NONE of the UNIX tools do anything _NEW_.  They are just poorly
designed and hard to use versions of utilities that every system I've
ever seen has versions of.  The difference is that most other systems
are _easier_ to learn and to use than the UNIX stuff is.  The thing is,
people spend so much time and effort to learn UNIX tools that they get
a feeling of accomplishment from them.  This feeling of accomplishment
comes from the complexity of what they've had to learn - not from the
value of it (sort of like winning ZORK - as I've said before).  In the
end, they can only do those things which people on other systems can do
easier.

Things are improving on UNIX.  Most workstations now come with GUI's
and other high-level interfaces which conceal or bypass the usual UNIX
tools stuff.  The problem with these is: once they are in place it
becomes possible to provide them _instead_ of the UNIX stuff - in fact
you don't need UNIX itself anymore.  Now, that _would_ be a step forward.

J. Giles

davoli@natinst.com (Russell Davoli) (02/01/91)

In article <1991Jan30.153036.25723@ux1.cso.uiuc.edu>, mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
> 
> In article <1991Jan30.100611.6787@lth.se> magnus%thep.lu.se@Urd.lth.se (Magnus Olsson) writes:
> >The concept of `folders' on the Mac is identical to what is called 
> >subdirectories under Unix, MS-DOS, VMS and so on. 
> 
> >Folders on the Mac are not in any way less powerful than subdirectories under
> >Unix. 
> 
> 
> But that is NOT true: there is no way on a Mac (using Finder, not MPW)
> to go from one open folder, back through the root directory,
> and up another multilayered tree into a presently closed directory
> directly, in one step.  (cd ../../graphics/Macpaint/dirty pictures - please
> note that Mac folder names can have spaces in them.)
> 
> 
> This is the most basic, absolutely fatal, flaw of the Mac idea.

I'm not so convinced this is such a fatal flaw.  It is possible to set a bit
in the LAYO (I think) resource in the Finder to allow a double click on the
title bar of a folder window to open the parent folder window.  True, you
might have to double-click a bunch, but that seems comparable to having to
type all those directory names (not to mention remembering the subdirectory
structure to begin with.)  Besides, if you're starting from the root directory,
the disk icons are always on the right-hand side of the screen in an easy
place to open the root folder window.

The really great thing about any graphical system is that you can keep windows
open to folders arbitrarily deep in the filesystem and switch directories
with a click, rather than typing a long string of directory names.

BTW, is there a different place to be arguing the finer points of user
interfaces?  Seems like this topic strays a bit far from computer
architecture.

- Russell

dik@cwi.nl (Dik T. Winter) (02/01/91)

In article <13252@lanl.gov> jlg@lanl.gov (Jim Giles) writes:

You should take into consideration that Unix is already quite old, and was in
its time far ahead of the then-current systems.  It still is far ahead when
compared to some current systems; but then I may only have been exposed
to dinosaurs, of course.  Also, these are not problems with UNIX per se.  These
are problems with the implementations currently provided.  It is not impossible
to create a version of UNIX with everything you mention (and it would still
be Unix).

 > Ok, you're the expert.  Let's hear how to do asynchronous I/O on UNIX
 > (not _buffered_ through the system - _real_ asynchronous I/O)?  Oh,
 > that's right, UNIX can't do that.
I got the impression that Cray's Unicos could do it.  And Unicos was an
implementation of UNIX the last time I looked.
 > 
 > Well, let's hear how to tell the system not to kill my active processes
 > when the system goes down?  Oh, UNIX doesn't have any automatic crash
 > recovery.
Neither do NOS/BE and VM/CMS last time I tried.  But Unicos can do this.
 > 
 > How about reconnect (you know, my modem drops off in the middle of a job
 > and I want to sign on and reconnect to the job)?  Oh, UNIX automatically
 > kills jobs when connection fails except when the job is 'nohup'ed - but,
 > even then, you can't reconnect: the only thing you can do with a
 > 'nohup'ed job from a _different_ terminal session is kill it.
Yes, NOS/BE was infinitely better.  The system did not even know that the
connection was dropped, so you could not log in because, according to the
system, you were already logged in.  That could take some hours.  Under VM/CMS
I have not yet been able to figure out how to do a reconnect.  Anyhow,
reconnecting to running processes could be implemented given enough
incentive.
 > 
 > Ok, how about error recovery?  Job fails (after, say, an hour of CPU
 > time) with a divide-by-zero error.  I want to go into the 'core' file
 > and edit out the zero and restart the job from where it left off.  Oh,
 > UNIX can't restart 'core' files.
I could not do that under NOS/BE either.  Worse, when doing it as a batch job,
all my intermediate output would be lost because I would have to catalog the
file.  (Same holds for NOS/VE.)
 > 
 > What?  NONE of the UNIX tools do anything _NEW_.  They are just poorly
 > designed and hard to use versions of utilities that every system I've
 > ever seen has versions of.
Perhaps nothing _NEW_.  But suppose I have a private library containing a
routine I want to change.  How do I find all calls to that routine in my
files so that I can verify the change will not have nasty side effects?
Give your favourite example for: NOS/BE, NOS, NOS/VE, VM/CMS, COS, or
your favourite system.  In Unix it is:
	find . -exec grep routine-name {} /dev/null \;
Yes it is tricky, and the syntax is painful (as some manuals for find will
confess), but it works.  And this is only a single example.
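
(A common faster variant - again just a sketch, assuming your system has
xargs:

	find . -type f -print | xargs grep routine-name /dev/null

The /dev/null, here as above, forces grep to print the file name even
when it is handed only a single file.)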

What about screen editors that normally overwrite the text that is present,
and where you first have to create space in order to be able to insert
stuff?  Like: oh, there should be an L between that O and D, good; position
the cursor, enter the Home key, give the command INSERT 1, and enter the L?
Yes, such editors do exist; on some it is in many cases easier to retype the
remainder of the line, introducing another possible source of errors.  At least UNIX
got some things correct.  Vi is already pretty old and still useful (although
a number of people prefer other editors).

I agree the user interface could be improved; but in my opinion the basic tools
strategy is sound.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

john@newave.UUCP (John A. Weeks III) (02/01/91)

In <1991Jan30.153036.25723@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu:
> In <1991Jan30.100611.6787@lth.se> magnus%thep.lu.se@Urd.lth.se:

> > Folders on the Mac are not in any way less powerful than subdirectories
> > under Unix. 

> But that is NOT true: there is no way on a Mac (using Finder, not MPW)
> to go from one open folder, back through the root directory,
> and up another multilayered tree into a presently closed directory
> directly, in one step.  (cd ../../graphics/Macpaint/dirty pictures - please
> note that Mac folder names can have spaces in them.)

The Mac file browser, as with all Mac interfaces, is fairly easily extended
using INITs, cdevs, and the like.  In fact, there is at least one very
popular program that modifies the Mac file browser.

Given this, it seems to me that if "cd ..someverylargeamountoftyping"
was needed on the Mac, someone would have provided this capability
somewhere along the line.

Do you think anyone would really prefer to type a 40 character path
name if they could get there in 5 mouse clicks?  And since Mac file names
can be very long, the path is likely to be many more characters.  And
then try to remember the exact spelling of all of the folder names.

Not likely.  Especially not for us dylexics.

> This is the most basic, absolutely fatal, flaw of the Mac idea.

I think I will toss my mac off of my patio and run out and buy windows 3.

-john-

-- 
===============================================================================
John A. Weeks III               (612) 942-6969               john@newave.mn.org
NeWave Communications                 ...uunet!rosevax!tcnet!wd0gol!newave!john
===============================================================================

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/01/91)

In article <13252@lanl.gov> jlg@lanl.gov (Jim Giles) writes:

| What?  NONE of the UNIX tools do anything _NEW_.  They are just poorly
| designed and hard to use versions of utilities that every system I've
| ever seen has versions of.  

  That's great! We've been spending thousands of dollars to get unix
tools for other systems, and all this time they were right there.

  So what programs under MS-DOS, AmigaDOS, Macintosh, VMS, AOS, CMS and
JPL correspond to awk, sed, yacc, diff, and grep? Other than VMS, which
has a diff which produces an output which is only human readable, and a
search command which lacks powerful pattern matching, the capabilities
seem... well *missing* is the first word which comes to mind.
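
As a one-line taste of what's missing (a sketch - it assumes a Unix-style
password file):

	awk -F: '{ print $1 }' /etc/passwd | sort

That lists every login name, sorted.  I'd like to see the DOS batch file
that does the same.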

-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

chip@tct.uucp (Chip Salzenberg) (02/01/91)

According to jlg@lanl.gov (Jim Giles):
>... asynchronous I/O ...
>... automatic crash recovery ...
>... reconnect ...
>... error recovery ...

These are *kernel* features, unrelated to the UNIX tool philosophy.
(Though they can sometimes be emulated incompletely; Dan Bernstein's
pty tool allows reconnection after hangup.)

>NONE of the UNIX tools do anything _NEW_.

Novelty is not necessarily a virtue.  The fact that UNIX tools do
ordinary, mundane things is the precise reason for their utility: a
given complex task can often be accomplished by the execution of a
sequence of mundane operations -- tools -- in a particular order.
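
The classic demonstration (essentially Doug McIlroy's, not mine; the file
name is made up): print the ten most common words in a document using
nothing but mundane tools:

	tr -cs 'A-Za-z' '\n' < chapter.txt | tr 'A-Z' 'a-z' |
	sort | uniq -c | sort -rn | sed 10q

Not one of those programs knows anything about word frequencies.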

>People spend so much time and effort to learn UNIX tools that they get
>a feeling of accomplishment from them.  This feeling of accomplishment
>comes from the complexity of what they've had to learn - not from the
>value of it (sort of like winning ZORK - as I've said before).

Hardly.  I feel satisfaction, not from having learned a UNIX tool, but
from having used it.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
 "I want to mention that my opinions whether real or not are MY opinions."
             -- the inevitable William "Billy" Steinmetz

mike (Michael Stefanik) (02/02/91)

In article <13252@lanl.gov> lanl.gov!jlg (Jim Giles) writes:
>Ok, you're the expert.  Let's hear how to do asynchronous I/O on UNIX
>(not _buffered_ through the system - _real_ asynchronous I/O)?  Oh,
>that's right, UNIX can't do that.

[ continues onward with such drivel ... ]

Should you forget, UNIX is a multiuser operating system; its entire
thrust is towards providing a stable environment.  Allowing one process
to exclusively attach itself to a device without any type of mitigation
on the part of the kernel violates this stability.

>What?  NONE of the UNIX tools do anything _NEW_.  They are just poorly
>designed and hard to use versions of utilities that every system I've
>ever seen has versions of.  [...]

Again, this is a matter of personal taste.  *I* find the tools to be
simple and concise, and they do what they're told.  The tools philosophy is
one tool for one job.  You don't use a hammer to pound in a screw (although
you could) when a screwdriver will do quite nicely.  And unlike many
other systems, the UNIX toolbox is full to overflowing.  I would much
prefer too many tools to too few.  And who am I to say that, because I
don't use a particular tool, someone else won't?

>Things are improving on UNIX.  Most workstations now come with GUI's
>and other high-level interfaces which conceal or bypass the usual UNIX
>tools stuff.  The problem with these is: once they are in place it
>becomes possible to provide them _instead_ of the UNIX stuff - in fact
>you don't need UNIX itself anymore.  Now, that _would_ be a step forward.

It is obvious that UNIX is not an operating system that suits your
personal taste about what an operating system should be.  So, why are
you bothering to use it, argue against it, and waste bandwidth? 

-- 
Michael Stefanik                       | Opinions stated are not even my own.
Systems Engineer, Briareus Corporation | UUCP: ...!uunet!bria!mike
-------------------------------------------------------------------------------
technoignorami (tek'no-ig'no-ram`i) a group of individuals that are constantly
found to be saying things like "Well, it works on my DOS machine ..."

gil@banyan.UUCP (Gil Pilz@Eng@Banyan) (02/02/91)

In article <schumach.665346300@convex.convex.com> schumach@convex.com (Richard A. Schumacher) writes:
>Funny thing about that subset of users called customers: they're
>the ones with the money. They have to be convinced to buy your stuff. 
>Often they won't sit still long for a lot of lecturing about what 
>you think they should want and use versus what they DO use and what
>THEY think they want. If they go elsewhere with their money, who is 
>the loser?

Clearly if they are incapable of _listening_ and _learning_ THEY will
be the ones to lose when the systems they purchase turn out to be
cumbersome, kludgy monoliths capable of doing one thing and one thing
only. If the system you're trying to sell them really does a better
job (and you can prove it!) then somebody, somewhere will buy it, use
it, and proceed to tromp all over Joe-Don't-Tell-Me-What-I-Want-I-
Know-All-About-Computers.

Obviously it's a two way street. Designers *HAVE* to listen to their
customers for real-world feedback but a smart customer will listen to
the designers to find out where they're going and what's currently
available/possible.

Needless complexity is clearly an evil but mindless simplicity is not
its answer. Complex problems cannot be solved simply. Computers are
not magic! People who buy systems on the idea that some single magic
widget will automatically make their complex, shifting mish-mosh of
various problems disappear deserve the kind of systems they get.

now when it starts to working all the girls unlock their doors
even my best friends woman wants to show me what it's for
she likes my mojo much better than his
but even she can't tell me what a mojo is
	- the young fresh fellows

Gilbert Pilz Jr.    "I'd rather be lucky than good _any_ day."
gil@banyan.com

ddean@rain.andrew.cmu.edu (Drew Dean) (02/02/91)

Since this is comp.arch, can we please move this discussion over to
comp.os.misc?

And now back to our own religious wars...:-)

Thanks,
-- 
Drew Dean
Drew_Dean@rain.andrew.cmu.edu
[CMU provides my net connection; they don't necessarily agree with me.]

sef@kithrup.COM (Sean Eric Fagan) (02/02/91)

In article <2880@charon.cwi.nl> dik@cwi.nl (Dik T. Winter) writes:
>> Ok, how about error recovery?  

You're barking up the wrong tree, here.  The example you give is *not* error
recovery, it's trying to work around a bug in your code.

>>Job fails (after, say, an hour of CPU
>> time) with a divide-by-zero error.  

Either a) properly code your program, b) catch the signal and do something
meaningful, or c) do something machine-specific.

>> I want to go into the 'core' file
>> and edit out the zero and restart the job from where it left off.  
>> Oh,
>> UNIX can't restart 'core' files.

Neither can any other system I've seen (except some batch systems).  Mostly
because they did not always have the equivalent of a core-dump.  NOS
provided a way to do this, I believe, but you ended up doing what you would
do in UNIX anyway:  turn the "core" into an executable after editing it, and
patching things up such that it went the way you wanted it to.

However, even in NOS, this could cause problems, depending on what files the
program had open (direct access file, for example; it could be locked, and
would be returned when the process died, if I remember correctly).  UNIX has
the "problem" manyfold, since there are not "local" files.

>But suppose I have a private library containing a
>routine I want to change.  How do I find all calls to that routine in my
>files so that I can verify the change will not have nasty side effects?
>Give your favourite example for: NOS/BE, NOS, NOS/VE, VM/CMS, COS, or
>your favourite system.  In Unix it is:
>	find . -exec grep routine-name {} /dev/null \;

Ok.  Assuming you had, under NOS, a script to do the build, I would do:

	<scriptname>

After all, what's the use of an incredibly fast machine if you don't
exercise it every once in a while? 8-)

-- 
Sean Eric Fagan  | "I made the universe, but please don't blame me for it;
sef@kithrup.COM  |  I had a bellyache at the time."
-----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.

hrubin@pop.stat.purdue.edu (Herman Rubin) (02/02/91)

In article <3169@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
> In article <13252@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
> 
> | What?  NONE of the UNIX tools do anything _NEW_.  They are just poorly
> | designed and hard to use versions of utilities that every system I've
> | ever seen has versions of.  
> 
>   That's great! We've been spending thousands of dollars to get unix
> tools for other systems, and all this time they were right there.
> 
>   So what programs under MS-DOS, AmigaDOS, Macintosh, VMS, AOS, CMS and
> JPL correspond to awk, sed, yacc, diff, and grep? Other than VMS, which
> has a diff which produces an output which is only human readable, and a
> search command which lacks powerful pattern matching, the capabilities
> seem... well *missing* is the first word which comes to mind.

What do these tools have to do with the UNIX operating system?  Possibly
there are some legal restrictions in some cases, but I believe most of
these are public domain.  At most, interfaces would have to be rewritten.

The hardware provides the capabilities.  As far as possible, software
should allow the user access to these capabilities.  Except for security
restrictions, and possibly some restrictions to prevent physical damage
to the machine, software should allow whatever hardware allows, instead
of restricting it.

As far as grep is concerned, the only UNIX part in the interface is handling
file access and the ability to display file names.  I agree that any 
operating system needs this, and the more flexible the better.  This
means reading directories, and the possibility of accessing files only
found by reading directories.  That is part of the operating system.
Getting line numbers, etc., is not.

The hardware manufacturers should help the users use the hardware; the
systems designers should make the systems flexible so that tools can be
easily inserted or replaced; and the tool designers should make the tools
so that they do not restrict the users.  To far too great an extent, all
of these principles are being violated.

--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

ccplumb@rose.uwaterloo.ca (Colin Plumb) (02/03/91)

sef@kithrup.COM (Sean Eric Fagan) wrote:
>>> I want to go into the 'core' file
>>> and edit out the zero and restart the job from where it left off.  
>>> Oh,
>>> UNIX can't restart 'core' files.
>
>Neither can any other system I've seen (except some batch systems).  Mostly
>because they did not always have the equivalent of a core-dump.  NOS
>provided a way to do this, I believe, but you ended up doing what you would
>do in UNIX anyway:  turn the "core" into an executable after editing it, and
>patching things up such that it went the way you wanted it to.

Well, "undump" works in Unix to restart a core file, but from my
reading of the docs, Mach's "macho" file format is designed as both a
load format and a core dump.  So you could have multiple threads
running around, one calls abort(), and it spits out a core file.  You could
then restart everything with "./core".  I think the existing 4.3
wrapper uses different formats for its core dumps, but the loader will
understand a macho file (as well as Berkeley a.out format) if you feed
it to it.
-- 
	-Colin

rhealey@digibd.com (Rob Healey) (02/05/91)

In article <13093@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
>Yes.  Except that there's a _huge_ body of skills out there to be learned.
>_MOST_ of these other skills are _obviously_ more useful than the UNIX
>'tools' stuff.  The most productive programmers I have known here have
>only recently been introduced to UNIX (most think it's horrible).  I'd
>rather learn what _they_ know than learn UNIX.
>
	What were the programmers using before for an environment? What
	kind of programs were they productive at? Did the kind of programs
	they had to produce change at the same time the environment was changed?

	Most probably think it's horrible because it's not based on the
	model they've used up till now. UNIX can be made into many
	different environments, with this flexibility comes the complexity
	of deciding what parts are useful and what aren't. Maybe they
	aren't comfortable with a system that doesn't demand you do
	things in one way and one way only. Too many choices to make without
	enough experience to make them?

	In my 6 or so years working with UNIX I've changed my programming
	environment 4 or 5 times. Now with window interfaces things can
	be changed to have multiple environments inside one overall
	window environment. It takes a while to find the combination of UNIX
	tools and environment that work best with the job a programmer or
	computer scientist has to do. If your programmers are so experienced,
	I'm surprised they wouldn't realize this; it happens anytime you
	have to switch environments, no matter what OS you are using.

		-Rob Healey


Speaking for self, not company.

rhealey@digibd.com (Rob Healey) (02/05/91)

In article <4898@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>In article <12953@lanl.gov>, jlg@lanl.gov (Jim Giles) writes:
>> be forced upon you.  This brings us to the UNIX tools/pipes/shells
>> crowd: who want to force _their_ way of woring onto _everybody_.
>
>I believe the systems are doing too much already.  A compiler, editor,
>etc., should not be part of a system.  Loaders of fully compiled files,
>file manipulation, job allocation, etc., are needed.  The system should
>make provision for flexible inclusion of the others.
>
	So, all you have to do is write your own user interface, aka shell,
	that works the way you want it, then substitute it into the
	password file as the environment to run by default. Alternatively,
	have the .profile or .login exec the environment.

	Now, write your own idea of what edit/compiler/loader should
	be. There's nothing sacred about the UNIX loader; read up on the
	format of an executable and roll your own if you don't like
	ld. The format is usually in /usr/include/a.out.h and friends.
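
	[As a hedged illustration, assuming a traditional BSD-style
	<a.out.h> with the classic struct exec (field names vary by
	vendor), a few lines of C suffice to peek at the header any
	home-rolled loader would have to interpret:]

	    #include <stdio.h>
	    #include <a.out.h>

	    int main(int argc, char **argv)
	    {
	        struct exec hdr;
	        FILE *fp;

	        if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL) {
	            perror("open");
	            return 1;
	        }
	        if (fread((char *)&hdr, sizeof hdr, 1, fp) != 1) {
	            perror("read");
	            return 1;
	        }
	        printf("magic 0%lo  text %lu  data %lu  bss %lu  entry 0x%lx\n",
	               (unsigned long)hdr.a_magic, (unsigned long)hdr.a_text,
	               (unsigned long)hdr.a_data, (unsigned long)hdr.a_bss,
	               (unsigned long)hdr.a_entry);
	        return 0;
	    }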

	If you're complaining about UNIX not providing YOUR idea of
	what should be provided, then that's unfair. VMS, VM and any other
	proprietary OS I've ever used doesn't allow me to replace my WHOLE
	environment the way I want to. You're pretty much SOL if you want to
	make your own edit/compiler/loader system and changing your
	command line interpreter is rarely as easy as editing one file
	or exec'ing the new environment.

	By changing my default command line interpreter under UNIX I
	can make UNIX look and work any way I please. That's not to say
	it would be trivial to write a whole new shell that presents the file
	system, process and I/O systems in a completely different manner, I'm
	only saying UNIX is the only OS I've worked with that would let me
	replace the WHOLE smash if I was that determined.
	
	What you seem to refer to as UNIX: pipes, tools and shells, can ALL be
	replaced or taken away so you never have to see them. If you want to
	write a command line interpreter that uses mailboxes instead of
	pipes you're perfectly free to do so. If you want to rewrite the
	whole set of tools available to you, you can do that too.

	UNIX is the OS and provides basic I/O and job scheduling along
	with support calls for implementing user environments. Everything
	above the kernel call level can be replaced with whatever you
	want. Better than that, you can replace just bits and pieces you
	don't like.

	If you don't like what ships with stock UNIX systems, write your own
	or buy replacements! No law is FORCING you to use what comes on the
	OS floppies.

	Personally, I'd write my own command line interpreter to take
	commands the way you want them and change them into the format
	that the exec(2) family of functions wants. Also, turn off shell globbing
	and write a library that parses the command line the way the
	PROGRAM wants it parsed. None of this is written in stone by
	the system; you CAN change it if you want to.
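
	[A minimal sketch of such an interpreter -- hypothetical code,
	not Healey's: it splits the line on blanks with no globbing
	and no quoting, and hands the raw words to execvp(), leaving
	all interpretation to the program itself:]

	    #include <stdio.h>
	    #include <string.h>
	    #include <sys/wait.h>
	    #include <unistd.h>

	    int main(void)
	    {
	        char line[1024];
	        char *argv[64];
	        int i;

	        while (fputs("cmd> ", stdout), fflush(stdout),
	               fgets(line, sizeof line, stdin) != NULL) {
	            i = 0;
	            argv[i] = strtok(line, " \t\n");
	            while (argv[i] != NULL && i < 62)
	                argv[++i] = strtok(NULL, " \t\n");
	            argv[63] = NULL;            /* hard stop on overflow */
	            if (argv[0] == NULL)
	                continue;               /* empty line */
	            if (fork() == 0) {
	                execvp(argv[0], argv);  /* raw words, unglobbed */
	                perror(argv[0]);
	                _exit(1);
	            }
	            wait(NULL);
	        }
	        return 0;
	    }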

	The UNIX OS doesn't force you to do things in a certain way, beyond 
	what any OS kernel forces on you. You can replace as much or as little
	as your talent and $$$ will let you.

	The statement that the UNIX OS FORCES you to do something (binary executable
	format) is applicable to ALL computer kernels. The difference between
	UNIX and other OS's is that the WHOLE user interface, compiler,
	libraries and support programs CAN be chucked if you don't like 'em; the
	kernel knows zilch about any of it. OK, it knows about the envp
	stuff but that's support for a user interface and can be used however
	a CLI wants; it can even be ignored.

	You can't change the executable format of a binary but the last time I
	checked other big OS's wouldn't let you do that either.

	So, what all this comes down to is how much you're willing to put
	into changing your environment. The actual OS provides the basics
	for whatever you want. The stock user interfaces are based on
	the pipe and shell model rather than the monolithic, "I know
	better than you what you want and how it should be done", model.
	If you want the monolithic model or some combination of the two,
	write your own from scratch or write libraries and turn off the
	shell features that interfere with the library, most likely
	globbing and quoting.

	If you absolutely can't deal with UNIX, then stay with what you're
	used to. Changing to UNIX for the hell of it might not be right
	for you. However, I've found UNIX to be much more flexible than
	any other micro or mainframe OS I've ever used.

	Maybe we should start a list of features we like and hate in
	ALL OS's. That way, someone can write an OS that we'll ALL be
	happy with!

		-Rob Healey

Speaking for self, not company.

rhealey@digibd.com (Rob Healey) (02/05/91)

In article <4455@alliant.Alliant.COM> tj@Alliant.COM (Tom Jaskiewicz) writes:
>In article <1991Jan30.094646.6510@lth.se> magnus@thep.lu.se (Magnus Olsson) writes:
>>What about TeX? It's free.
>FREE??
>You mean it doesn't cost me any disk space??
>You mean I don't have to spend any time learning how to use it??
>
	OK, I'll bite:

	Any publishing system you choose will take up disk space, time
	to learn, printer resources and supplies, and maintenance. TeX
	will cost you less for the original code. You also get the source
	so you can fix something if it's broken rather than hoping that
	your vendor will take pity on you and fix the problem n+1
	time periods hence.

		Raising the rabble once again,

		-Rob

Speaking for self, not company.

rhealey@digibd.com (Rob Healey) (02/05/91)

In article <schumach.665346300@convex.convex.com> schumach@convex.com (Richard A. Schumacher) writes:
>Funny thing about that subset of users called customers: they're
>the ones with the money. They have to be convinced to buy your stuff. 
>Often they won't sit still long for a lot of lecturing about what 
>you think they should want and use versus what they DO use and what
>THEY think they want. If they go elsewhere with their money, who is 
>the loser?

   Mr. Schumacher is correct. Most successful solutions require the end user
   to log in, enter a password and their application just works. No commands
   to type in, no muss, no fuss. For single user applications, they move
   the big red switch one way and their application starts up, they move it
   the other and it shuts down. They don't give a rat's behind HOW it
   happens, only that it DOES happen and they can use it as easily as
   their other desk tools.

   On the other hand, some customers are weird ones called "Engineers,
   scientists and programmers." They can have a very pesky habit of
   wanting to know HOW it works, not just that it DOES work...

		-Rob

Speaking for self, not company

jlg@lanl.gov (Jim Giles) (02/05/91)

From article <27A84C5C.24EF@tct.uucp>, by chip@tct.uucp (Chip Salzenberg):
> According to jlg@lanl.gov (Jim Giles):
> [...]
>>The most productive programmers I have known here have
>>only recently been introduced to UNIX (most think it's horrible).
> 
> This anecdotal evidence proves nothing.  Anyone who changes
> environments will notice only the features missing from their old
> environment, since the new and potentially useful features aren't yet
> a part of their work patterns.  Thus, initial reactions to an
> environment change will almost always be negative.  No surprise here.

Exactly.  Which is why the part of my article that you _cut_ is
relevant here.  The people I'm talking about have used _many_ different
systems and have switched many times.  They _know_ what's involved
in moving to a new system.  They _have_ learned a lot about the
UNIX environment (in spite of only recent exposure - for most systems,
"recent" would be defined as "the last few weeks", on UNIX the definition
is more like "the last year or so"; because UNIX is _MUCH_ harder to
come to speed on).  The conclusion is that UNIX does _not_ have
sufficient capability to offset those features it lacks.

But, your point is quite accurate and explains why there's so much
resistance to change from UNIX users.  Most of them have never used
another system - or only VMS or MS-DOS - and are _very_ negative
about any change to _their_ status quo.

> [...]
>>I'd rather learn what _they_ know than learn UNIX.
> 
> You would trust UNIX neophytes to evaluate the value of UNIX?
> To me, that decision appears very unwise.

I gave them as a data point.  My judgement of UNIX is my own.  About
6 years ago (when I first got my workstation), I spent lots of time
learning UNIX.  I got to be fairly good.  Fortunately, most of that
garbage has now faded from memory.  However, since joining this
discussion, a lot of UNIX supporters have sent me examples of stuff
to "prove" how powerful UNIX is.  These examples have certainly been
enough to refresh my memory: they all do something trivial or useless,
and they all do so in a _very_ arcane manner.  One person who posted
to the net said he had an "epiphany" from a shell script (which used
4 commands and a script that looked like line noise) which renamed
all his '.pas' files so that they ended with '.p' instead.  I reserve
my religious ecstasy for something more than renaming files.  And,
indeed, that is my memory of UNIX tools - you spend all your time
learning to do complex and peculiar things that are, in the end, not
really all that impressive.  I decided I'd rather learn to get some
_real_ work done.


J. Giles

les@chinet.chi.il.us (Leslie Mikesell) (02/05/91)

In article <5038@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:

>>   So what programs under MS-DOS, AmigaDOS, Macintosh, VMS, AOS, CMS and
>> JPL correspond to awk, sed, yacc, diff, and grep? Other than VMS, which
>> has a diff which produces an output which is only human readable, and a
>> search command which lacks powerful pattern matching, the capabilities
>> seem... well *missing* is the first word which comes to mind.

>What do these tools have to do with the UNIX operating system?  Possibly
>there are some legal restrictions in some cases, but I believe most of
>these are public domain.  At most, interfaces would have to be rewritten.

What unix provides is the ability to easily tie these tools together
so that you can combine the operations of the individual tools.  How
would you propose to get an equally easy-to-use interface in a
single-tasking environment like MSDOS or CMS?

>The hardware provides the capabilities.  As far as possible, software
>should allow the user access to these capabilities.  Except for security
>restrictions, and possibly some restrictions to prevent physical damage
>to the machine, software should allow whatever hardware allows, instead
>of restricting it.

Exactly, and if you are inclined toward the toolbox approach, that
means the OS should provide multi-tasking, virtual memory, and
an efficient means to connect the output of one program to the input
of another.  Also, you need a quick and easy way to automate any
portion of a task that the operator finds to be repetitive.
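
[To make "an efficient means to connect the output of one program to
the input of another" concrete: a hedged sketch of roughly what the
shell does for "ps | grep sh", using pipe(2), fork(2) and dup2(2):]

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];

        if (pipe(fd) == -1) {
            perror("pipe");
            return 1;
        }
        if (fork() == 0) {          /* producer: ps */
            dup2(fd[1], 1);         /* its stdout is the write end */
            close(fd[0]);
            close(fd[1]);
            execlp("ps", "ps", (char *)NULL);
            perror("ps");
            _exit(1);
        }
        if (fork() == 0) {          /* consumer: grep sh */
            dup2(fd[0], 0);         /* its stdin is the read end */
            close(fd[0]);
            close(fd[1]);
            execlp("grep", "grep", "sh", (char *)NULL);
            perror("grep");
            _exit(1);
        }
        close(fd[0]);               /* parent drops both ends... */
        close(fd[1]);               /* ...so the reader sees EOF */
        while (wait(NULL) > 0)
            ;
        return 0;
    }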

>As far as grep is concerned, the only UNIX part in the interface is handling
>file access and the ability to display file names.  I agree that any 
>operating system needs this, and the more flexible the better.  This
>means reading directories, and the possibility of accessing files only
>found by reading directories.  That is part of the operating system.
>Getting line numbers, etc., is not.

Wrong!  The useful function has nothing to do with directory reading
because it doesn't do any (at least under unix where the shell normally
expands wildcard filenames).  It has much more to do with the fact
that grep can be used to parse the output of any other program that
generates a stream of lines, and in turn its output (and exit status
indicating success of a match) can be used by any other program.

While it is possible to approximate the functionality of pipes using
temporary files, there are many situations where this is impractical.
For example the total amount of data may be larger than the available
storage, or you may wish to interrupt the processing as soon as
the first output appears.

Les Mikesell
  les@chinet.chi.il.us

jlg@lanl.gov (Jim Giles) (02/05/91)

From article <2880@charon.cwi.nl>, by dik@cwi.nl (Dik T. Winter):
> In article <13252@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
> 
> You should take in consideration that Unix is already quite old, and was in
> its time far ahead of the then current systems.  [...]

I didn't write this!  You should have moved the tag line down so that
it introduced the quote from _my_ article, not what you said.  I wouldn't
have written the above.  I don't believe it is true.

> [...]                                                      It is not impossible
> to create a version of UNIX with everything you mention (and it would still
> be Unix).

In what way?  By that logic, I'm still really a lemur-like creature which
lived alongside the dinosaurs.  Sure, I've been extended to include lots
of other (or enhanced) features, but, by your taxonomy, I'm still a
lemur - and so are you.

J. Giles

jlg@lanl.gov (Jim Giles) (02/05/91)

From article <411@bria>, by <somebody>:
> In article <13252@lanl.gov> lanl.gov!jlg (Jim Giles) writes:
>>Ok, you're the expert.  Let's hear how to do asynchronous I/O on UNIX
>>(not _buffered_ through the system - _real_ asynchronous I/O)?  Oh,
>>that's right, UNIX can't do that.
> 
> [ continues onward with such drivel ... ]
> 
> Should you forget, UNIX is a multiuser operating system; it's entire
> thrust is towards providing a stable environment.  Allowing one process
> to exclusively attach itself to a device without any type of mitigation
> on the part of the kernel violates this stability.

Except for UNIX, every system I've ever used had asynchronous I/O.
Every one of them was a multiuser operating system.  Every one was
_more_ stable than UNIX.  Who is it that's spouting drivel?

Besides, UNIX was NOT designed as a multi-user system.  That feature,
like security, virtual memory, etc., was tacked on.  The system is a
kludge from end to end.

For example, every security person I've ever talked to made the comment
that SUID (set UID) programs are an inherent security flaw.  UNIX has to
have them: they couldn't implement inbuilt security in the system without
changing it - and backward compatibility seems to be the only rule for
UNIX system design.  And I really mean _backward_.

> [...]
> It is obvious that UNIX is not an operating system that suites your
> personal taste about what an operating system should be.  So, why are
> you bothering to use it, argue against it, and waste bandwidth? 

You show me a place I can work, study, or do business where _I_ get
to pick which system to use and I'll go there.  At present, mostly
the non-technical management picks the system - in response to hype
from the outside.  These are the same people who picked OS/360 for
their companies in the 60's.  Now they're picking UNIX.  Their track
record is appalling.  As for wasting bandwidth - it's the hype on this
net which sets the stage for all the misinformation about how "wonderful"
UNIX is.  I think all _that_ is wasted bandwidth.  Somebody has to
counter it.

J. Giles

gl8f@astsun7.astro.Virginia.EDU (Greg Lindahl) (02/05/91)

In article <13615@lanl.gov> jlg@lanl.gov (Jim Giles) writes:

>But, your point is quite accurate and explains why there's so much
>resistence to change from UNIX users.  Most of them have never used
>another system - or only VMS or MS-DOS - and are _very_ negative
>about any change to _their_ status quo.

This is a perfect example of a religious argument -- making broad
generalizations. This isn't architecture. Take it to
alt.religion.computers.

subbarao@phoenix.Princeton.EDU (Kartik Subbarao) (02/05/91)

I still don't understand you, Jim. You make some valid comments that perhaps
people who are not accustomed to UNIX might find it hard initially, and
that UNIX users do not like to use other operating systems because they
especially like their environment. One can be productive in different
operating systems, and no one here is saying that all other operating
systems are evil.

>I gave them as a data point.  My judgement of UNIX is my own.  About
>6 years ago (when I first got my workstation), I spent lots of time
>learning UNIX.  I got to be fairly good.  Fortunately, most of that
>garbage has now faded from memory.  However, since joining this
>discussion, a lot of UNIX supporters have sent me examples of stuff
>to "prove" how powerful UNIX is.  These examples have certainly been
>enough to refresh my memory: they all do something trivial or useless,
>and they all do so in a _very_ arcane manner.  

What are these value judgements here? ``Trivial'' -- ``useless'' --
something that you don't remember how to do is trivial and useless. I see.


>One person who posted to the net said he had an "epiphany" from a shell script (which used
>4 commands and a script that looked like line noise) which renamed
>all his '.pas' files so that they ended with '.p' instead.  I reserve
>my religious ecstasy for something more than renaming files.  

I'm sure most people who read this newsgroup don't get ecstatic in renaming
files from .pas to .p. (That is, once they know how :-) )

line noise? I really don't think that:

% foreach i (*.pas)
? mv $i $i:r.p
? end

constitutes any more line noise than your inappropriate comments on choice
of commands.

How would you like to rename files in your favorite operating system?
Please tell us.

I really don't think that what you want to call the move command is really
appropriate in deciding what operating system you like... But then again,
that's just me.

>And, indeed, that is my memory of UNIX tools - you spend all your time
>learning to do complex and peculiar things that are, in the end, not
>really all that impressive.  I decided I'd rather learn to get some
>_real_ work done.

Jim - I *don't* spend time learning to do things that I don't want to do.
For lots of folks, learning UNIX is fun. They like the challenge of
figuring out new ways to do things. But enough hype. UNIX can also be used
by those who just want to get the job done. That's the whole charm of it.
You don't need to be a perl/awk wizard or shell hacker if all you're going to do
is run some software program. What's great about UNIX is that it allows you
to go as deep as you want. But then again, what's the use of me telling
you this? It's been sed (pun intended :-) ) 1000 times.

What still doesn't cease to amaze me is your persistence. But then again,
maybe this thread has gone on long enough.

			-Kartik


--
internet# find . -name core -exec cat {} \; |& tee /dev/tty*
subbarao@{phoenix or gauguin}.Princeton.EDU -|Internet
kartik@silvertone.Princeton.EDU (NeXT mail)       -|	
SUBBARAO@PUCC.BITNET			          - Bitnet

dik@cwi.nl (Dik T. Winter) (02/05/91)

In article <13615@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
 > From article <27A84C5C.24EF@tct.uucp>, by chip@tct.uucp (Chip Salzenberg):
 > > This anecdotal evidence proves nothing.  Anyone who changes
 > > environments will notice only the features missing from their old
 > > environment, since the new and potentially useful features aren't yet
 > > a part of their work patterns.  Thus, initial reactions to an
 > > environment change will almost always be negative.  No surprise here.
 > Exactly.  Which is why the part of my article that you _cut_ is
 > relevant here.  The people I'm talking have used _many_ different
 > systems and have switched many times.  They _know_ what's involved
 > in moving to a new system.
I thought I also had some exposure (routinely: NOS/BE, NOS/VE, VM/CMS,
AOS/VS, UNIX Sys V, UNIX BSD, MACOS; routinely in the past: SCOPE, VSOS, NOS,
MVS, COS; occasionally: SXOS, VMS, DOS); but I disagree.  Background:
numeric work mainly (still doing a lot in Fortran, and also C, also used/using
Ada, Algol 68, Algol 60, Pascal, etc.)  My experience is that the learning
curve mostly depends on the online help you get.  This varies from absent
(SCOPE, NOS/BE etc.) to abundant (IBM RS6000 AIX).  In both cases you find
nothing.  The major part of an OS is to give a good facility to help finding
solutions to your problems.  UNIX's 'man -k' helps, but is in many cases not
good enough.  The quality is very vendor dependent.  But I do not know what
the relevance is to architecture.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

dik@cwi.nl (Dik T. Winter) (02/05/91)

In article <13623@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
 > > [...]                                           It is not impossible
 > > to create a version of UNIX with everything you mention (and it would still
 > > be Unix).
 > 
 > In what way?  By that logic, I'm still really a lemur-like creature which
 > lived alongside the dinosaurs.
Surprise: this is the first time I needed a dictionary reading a message on the
news.  Hornby's Advanced Learner's Dictionary (yes, I think I am advanced):
lemur n., nocturnal animal of Madagascar, similar to a monkey but with a
	  foxlike face.
Apparently it still exists, contrary to the dinosaurs.

To the point: the UNIX philosophy is to have a lot of small utilities that
cooperate together to do what you want.  In some cases you need kernel
support, in others not.  To review two of the complaints:
1.  Asynchronous I/O.  Provided in some versions.  Not in others.  As somebody
    from SGI has pointed out, it does not always help.
2.  Reconnecting to a current session.  Not present in current version, but
    there is a package you can use to do just this.  (Dan Bernstein's pty
    package; he occasionally plugs it in some other newsgroups.)
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

ksand@Apple.COM (Kent Sandvik) (02/05/91)

In article <1991Jan30.153036.25723@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
>
>But that is NOT true: there is no way on a Mac (using Finder, not MPW)
>to go from one open folder, back through the root directory,
>and up another multilayered tree into a presently closed directory
>directly, in one step.  (cd ..\..\graphics\Macpaint\dirty pictures (please
>note that Mac folder names can have spaces in them))

There are utilities that keep a list of the most commonly used folders;
with a snap you can move from one folder to another.

>This is the most basic, absolutely fatal, flaw of the Mac idea.

Well, I would not agree on that... And prepare for the worst, because
the folder idea will be common in the UNIX and PC world as well.

Kent Sandvik



-- 
Kent Sandvik, Apple Computer Inc, Developer Technical Support
NET:ksand@apple.com, AppleLink: KSAND  DISCLAIMER: Private mumbo-jumbo
Zippy++ says: "Read my lips, no more C++ syntax..."

rex@cs.su.oz (Rex Di Bona) (02/05/91)

In article <13615@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
> I decided I'd rather learn to get some _real_ work done.
> 
> J. Giles

Easy, don't post so much news :-)

--------
Rex di Bona (rex@cs.su.oz.au)
Penguin Lust is NOT immoral

anthony@uarthur.UUCP (T. Anthony Allen) (02/06/91)

I have been following this religious debate and while there have been a few
interesting points, much of the discussion is spleen venting.

> I gave them as a data point.  My judgement of UNIX is my own.  About
> 6 years ago (when I first got my workstation), I spent lots of time
> learning UNIX.  I got to be fairly good.  Fortunately, most of that
> garbage has now faded from memory.  However, since joining this
> discussion, a lot of UNIX supporters have sent me examples of stuff
> to "prove" how powerful UNIX is.  These examples have certainly been
> enough to refresh my memory: they all do something trivial or useless,

This judgement is hardly unbiased and is unfair in implying
that Unix is not for people doing real work. We use Unix for real work and
choose it for many tasks over IBM or UNISYS mainframe operating systems, VAXen,
and yes, even over DOS. All the tools that J Giles derides come in handy.

The original posting that started this flammage unflatteringly contrasted Unix
to DOS as I recall. The majority of my coworkers doing real work (word
processing, light spreadsheet stuff) use DOS and are as happy as pigs in mud.
More power to them, J Giles is correct in asserting that Unix with its
admittedly arcane syntax is not for them. However, the value of DOS diminishes
dramatically with the complexity of the real work and with the complexity of
the DOS environment (networking, combinations of all those TSRs and user
friendly shrink wrapped packages).

I wouldn't use DOS in a fit, preferring Unix, even though I do not enjoy the
weirdnesses of Unix syntax any more than JG. Nevertheless, I think DOS provides
a __very__ useful contrast to Unix just because DOS does so little. It may be
a good example (well, an example, anyway) of a prototype microkernel
architecture that has succeeded just because it does not get in the way. I am
not too familiar with the microkernel-based operating system research, but
perhaps the success of DOS is an indication that a de minimis (okay, I can't
spell and don't know Latin) operating system designed for an environment more
complex than a single user workstation might provide the best of both Unix and
DOS. Such an operating system ought to be more like Unix in that it would work
on more than one computer architecture, and to be as successful as DOS it will
have to be as cheap or, better yet, in the public domain.

Can we consider such a beast? To make the discussion more relevant to
architecture, how would a minimalistic (I gave up on Latin) operating system
work in multiprocessing machines, or better yet for loosely coupled processors
linked by a network? Much of the Unix bloat was introduced for network support.
Can a microkernel-based operating system avoid a similar fate?

Tony Allen
The World Bank

chip@tct.uucp (Chip Salzenberg) (02/06/91)

According to jlg@lanl.gov (Jim Giles):
>For example, every security person I've ever talked to made the comment
>that SUID (set UID) programs are an inherent security flaw.

Ah, you've never talked to one who knows UNIX well.  A pity.

>... all the misinformation about how "wonderful" UNIX is.
>I think all _that_ is wasted bandwidth.  Somebody has to counter it.

For some reason, the phrase "tilting at windmills" comes to mind.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
 "Most of my code is written by myself.  That is why so little gets done."
                 -- Herman "HLLs will never fly" Rubin

chip@tct.uucp (Chip Salzenberg) (02/06/91)

[ Followups to alt.religion.computers; this isn't
  comp.arch fodder any more. ]

According to jlg@lanl.gov (Jim Giles):
>The people I'm talking about have used _many_ different systems and have
>switched many times.  They _know_ what's involved in moving to a new
>system.  ...  The conclusion is that UNIX does _not_ have sufficient
>capability to offset those features it lacks.

That conclusion would be meaningful if we knew to what purpose the
UNIX computers were being used.  UNIX is not right for all tasks.
Those who claim so have something in common with those who claim the
opposite: prejudice.

>But, your point is quite accurate and explains why there's so much
>resistance to change from UNIX users.

I do not desire to defend UNIX users, any more than I desire to attack
users of other systems.  It is the UNIX system itself that I wish to
defend against charges of being underpowered for serious use.

UNIX is dated.  But its tool-based approach has proved successful for
me in my work.  I cannot give any higher recommendation.  If you did
not have a similar experience, why bother trying to draw others away,
when only their own experience will be persuasive?

>... a lot of UNIX supporters have sent me examples of stuff
>to "prove" how powerful UNIX is ... they all do something
>trivial or useless ...

An example may be trivial; but that does not necessarily imply that
the thing being exemplified is trivial or inconsequential.  Examples
are, of necessity, limited in scope, since their purpose is not to
accomplish a goal but to illustrate an approach.  If the approach thus
illustrated does not appeal to you, then make that point; but remember
that it is nothing more or less than a matter of taste.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
 "Most of my code is written by myself.  That is why so little gets done."
                 -- Herman "HLLs will never fly" Rubin

jlg@lanl.gov (Jim Giles) (02/06/91)

From article <1991Feb4.234115.5334@murdoch.acc.Virginia.EDU>, by gl8f@astsun7.astro.Virginia.EDU (Greg Lindahl):
> [... something I said ...]
> This is a perfect example of a religious argument -- making broad
> generalizations. This isn't architecture. Take it to
> alt.religion.computers.

I didn't see you make this response to the person _I_ was responding
to, who said that non-UNIX users were afraid (or lazy) to learn
something new.  So, I take it that _my_ response was religious
(even though it was an accurate description of the UNIX supporters
I know and was not intended as a generalization except as a statistical
inference) and _their_ comments were _not_ religious.  Or, is it only
those people who oppose UNIX that you want to see leave the discussion?

Anyway, the recent posting of recommendations for the use of this
newsgroup implied that discussing system architecture was an appropriate
topic.  In any case, I _always_ post followups to the _same_ newsgroup
as the originator of the thread.  If this is an inappropriate topic,
take it up with those who began it - I am only responding.

J. Giles

jlg@lanl.gov (Jim Giles) (02/06/91)

From article <5934@idunno.Princeton.EDU>, by subbarao@phoenix.Princeton.EDU (Kartik Subbarao):
> I still don't understand you Jim.  [...]
> [...]              and noone here is saying that all other operating
> systems are evil.

You haven't been reading my mail.  I have.  A _lot_ of people have
been saying that all other operating systems are evil.

J. Giles

hrubin@pop.stat.purdue.edu (Herman Rubin) (02/06/91)

In article <27AF17B9.72E2@tct.uucp>, chip@tct.uucp (Chip Salzenberg) writes:
  
> According to jlg@lanl.gov (Jim Giles):
> >The people I'm talking about have used _many_ different systems and have
> >switched many times.  They _know_ what's involved in moving to a new
> >system.  ...  The conclusion is that UNIX does _not_ have sufficient
> >capability to offset those features it lacks.
 
> That conclusion would be meaningful if we knew to what purpose the
> UNIX computers were being used.  UNIX is not right for all tasks.
> Those who claim so have something in common with those who claim the
> opposite: prejudice.

Computers are typically not obtained for single purposes.  Both the hardware
and software should be designed for versatility.  Certainly the BSD version
of UNIX was intended for the extremely varied university environment.  This
ranges from simple ASCII editing to powerful number crunching, and everything
in between and even not covered by that range.

In addition, UNIX is a monster.  There are things normally included in UNIX
which should be add-ons, and not part of an operating system.  There is the
mistaken view that hardware should be designed to particular languages, and
never mind that some programs may be many times slower because of the lack of
particular instructions.  There is far more concern with "Braille optimization"
and never mind what the sighted person can do.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/07/91)

In article <5275@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:

|                                                                There is the
| mistaken view that hardware should be designed to particular languages, and
| never mind that some programs may be many times slower because of the lack of
| particular instructions.  There is far more concern with "Braille optimization"
| and never mind what the sighted person can do.

  This implies that there is some computer which can't run UNIX because
it has efficient instructions, or that some hardware company has left
out instructions which would make them many times faster. Please tell us
which instructions must be left out for UNIX, or which vendors you think
are so stupid.

  Every CPU I've ever seen has at least a few instructions which are not
used by any distributed part of UNIX. These are available for use by any
compiler, application, or through assembler.

  If there is some example of what you mean, could you show it? I have
no idea what you're talking about, other than in the abstract.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

jerry@TALOS.UUCP (Jerry Gitomer) (02/07/91)

hrubin@pop.stat.purdue.edu (Herman Rubin) writes:

: There is the
:mistaken view that hardware should be designed to particular languages, and
:never mind that some programs may be many times slower because of the lack of
:particular instructions.  There is far more concern with "Braille optimization"
:and never mind what the sighted person can do.
:--

Sad to say this will remain the case as long as "important"
prospects make computer system buying decisions based on
benchmarks written in the <supply your own
favorite/least-favorite> computer language.

Having worked for a couple of hardware vendors I can attest
to the fact that benchmark performance considerations biased
the design of some of our systems.


-- 
Jerry Gitomer at National Political Resources Inc, Alexandria, VA USA
I am apolitical, have no resources, and speak only for myself.
Ma Bell (703)683-9090      (UUCP:  ...{uupsi,vrdxhq}!pbs!npri6!jerry 

jbuck@galileo.berkeley.edu (Joe Buck) (02/07/91)

In article <977@TALOS.UUCP>, jerry@TALOS.UUCP (Jerry Gitomer) writes:
|> Having worked for a couple of hardware vendors I can attest
|> to the fact that benchmark performance considerations biased
|> the design of some or our systems.

I certainly hope so.  What else would you want to base your designs
on other than on quantitative data on how different decisions
affect performance?  Assuming you have a well-designed benchmark
suite, of course, and your benchmark programs are real -- that is,
they accept data and produce output, without dummy loops that can
be optimized away or fakery designed to defeat optimizing compilers.

--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (02/07/91)

Followups to alt.religion.computers, as this stuff shouldn't be diluting
comp.arch.

In article <27A97D37.4346@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
> According to jlg@lanl.gov (Jim Giles):
> >... asynchronous I/O ...

Yes, Jim, UNIX in general does not have truly asynchronous I/O. Why not?
Because it simply doesn't matter on machines smaller than your Crays.
The worst overhead we get for I/O buffering on the fastest machine here
is 5-10%; it's negligible on anything smaller than a big Sun.

> >... automatic crash recovery ...
> >... error recovery ...

Checkpointing handles these problems. The main reason I wrote my poor
man's checkpointer (pmckpt, available via anonymous ftp to
128.122.128.22, new version appearing soon) was because I was so sick of
hearing you rave that UNIX can't checkpoint files. It can, and I wish
you would stop.

> >... reconnect ...
> These are *kernel* features, unrelated to the UNIX tool philosophy.
> (Though they can sometimes be emulated incompletely; Dan Bernstein's
> pty tool allows reconnection after hangup.)

Actually, to give credit where credit is due, Steve Bellovin designed
the UNIX session manager, and pty's session management facilities are
only slightly more general. The session manager provides reconnection
facilities more powerful than in VMS or any other widely used operating
system. I disagree with Chip's statement that reconnect is a kernel
feature; the fact that it can be implemented at the user level is a
perfect example of the tool philosophy.

---Dan

chip@tct.uucp (Chip Salzenberg) (02/08/91)

According to hrubin@pop.stat.purdue.edu (Herman Rubin):
>There is the mistaken view that hardware should be designed to particular
>languages, and never mind that some programs may be many times slower
>because of the lack of particular instructions.

Machines have been designed for efficient execution in the most common
cases.  If the most common cases are compiled C and Fortran programs,
optimizing the hardware for those cases is only natural.

Remember, Herman, your instruction mix is radically atypical.

hrubin@pop.stat.purdue.edu (Herman Rubin) (02/08/91)

In article <27B19A39.321E@tct.uucp>, chip@tct.uucp (Chip Salzenberg) writes:
> According to hrubin@pop.stat.purdue.edu (Herman Rubin):
> >There is the mistaken view that hardware should be designed to particular
> >languages, and never mind that some programs may be many times slower
> >because of the lack of particular instructions.
> 
> Machines have been designed for efficient execution in the most common
> cases.  If the most common cases are compiled C and Fortran programs,
> optimizing the hardware for those cases is only natural.
> 
> Remember, Herman, your instruction mix is radically atypical.

It is atypical only in that I have a fair idea of what hardware instructions
can do and I can see how to use them.  The person taught programming by
learning Fortran or Pascal or C cannot possibly see this, and as is often
the case in other fields (mathematics and statistics in particular), knowing
procedures makes understanding far more difficult, at least.

Do you mean to tell me that computations involving both integers and 
floating-point numbers are not important?  Or that dividing floats by
floats, obtaining an integer quotient and a floating remainder, likewise?
That particular step is the first step of any trigonometric or exponential
function computation when it is not known in advance that the argument is
small.  There are other periodic and related functions for which this is
useful.  It would also speed up interpolation, etc.
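
[For concreteness, here is what that first reduction step costs when
software has to synthesize it -- a hedged C sketch, function name
invented here; Rubin's point is that one hardware instruction could
replace the divide, floor, convert, multiply and subtract:]

    #include <math.h>

    /* x divided by y, yielding an integer quotient q and a floating
       remainder r (assumes the quotient fits in a long) */
    void quo_rem(double x, double y, long *q, double *r)
    {
        double t = floor(x / y);

        *q = (long)t;
        *r = x - t * y;
    }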

On a slightly different, but related, tack, posters have been asking about
JOVIAL.  One of my late colleagues, who worked on it, told me that when
the top-notch programmers found that assembler was desirable, the language
people tried very hard to produce a fix making that particular use of
assembler unnecessary.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

jmaynard@thesis1.hsch.utexas.edu (Jay Maynard) (02/09/91)

In article <1991Feb02.112415.6180@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes:
>Unix doesn't have automatic crash recovery, *in general*, because there is
>no standard way to do it.  Note that SysV has a SIGPWR, but I don't think
>anyone actually uses it.  However, it shouldn't be too difficult to hook up
>a UPS to a unix box, and write a driver that gets a signal from the UPS and
>puts a copy of system memory onto disk; all you need to do, then, is add an
>option to the startup sequence to "recover," and, again, you're set.

Actually, the NCR Tower series (at least my XP does this) does use SIGPWR.
It has a battery backing up main memory, and if power is lost, as long as
the battery can keep the memory alive, recovery from an outage is simple:
the power recovery routine reloads any code running in intelligent
peripherals (which aren't battery-backed), performs some miscellaneous
cleanup, and then resumes where execution left off. The first time I knocked
the plug out of the wall at home, I plugged it back in, the machine ran
through its power-up diagnostics, told me it was beginning power fail
recovery, told me it was reloading the serial I/O board, and then apparently
hung. After a while, I got impatient, and hit enter - and was greeted by
the shell prompt, just as though nothing had ever happened.

No, Unix doesn't do power fail recovery, does it?

-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL | Never ascribe to malice that which can
jmaynard@thesis1.hsch.utexas.edu  | adequately be explained by stupidity.
"Today is different from yesterday." -- State Department spokesman Margaret
Tutwiler, 17 Jan 91, explaining why they won't negotiate with Saddam Hussein

pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/09/91)

On 7 Feb 91 14:42:08 GMT, brnstnd@kramden.acf.nyu.edu (Dan Bernstein) said:

brnstnd> Followups to alt.religion.computers, as this stuff shouldn't be
brnstnd> diluting comp.arch.

But this stuff should stay, because, probably by mistake :-), a
technical point on systems architecture has been raised here.

brnstnd> In article <27A97D37.4346@tct.uucp> chip@tct.uucp (Chip
brnstnd> Salzenberg) writes:

chip> According to jlg@lanl.gov (Jim Giles):

jlg> ... reconnect ...

chip> These are *kernel* features, unrelated to the UNIX tool philosophy.
chip> (Though they can sometimes be emulated incompletely; Dan Bernstein's
chip> pty tool allows reconnection after hangup.)

brnstnd> Actually, to give credit where credit is due, Steve Bellovin
brnstnd> designed the UNIX session manager, and pty's session management
brnstnd> facilities are only slightly more general. The session manager
brnstnd> provides reconnection facilities more powerful than in VMS or
brnstnd> any other widely used operating system.

I agree with the claim, but let me make a plug for Screen by Laumann
which is another thingie that allows reconnection and does session
management. I am just a satisfied customer.

brnstnd> I disagree with Chip's statement that reconnect is a kernel
brnstnd> feature; the fact that it can be implemented at the user level
brnstnd> is a perfect example of the tool philosophy.

Here is the architectural point: The layering of functionality between
hardware, kernel, and user process is the essence of architecture, as it
defines the surfaces which we use for multiplexing things.

Now, and this is another example of billjoysm versus engineering, things
like ptys, sockets and job control, that allow you to implement sessions
management, are simply wrong, and they should be *neither* in the kernel
nor in a tool. The fact that you *can* implement as a tool uner BSD,
using incredible contortions, means that not all is lost, thankfully.

But it remains true that the basic problem is that Unix does not have an
architecturally sound uniform referent; file descriptors/pathnames are
its main naming technology, and unfortunately the interface to a tty
file descriptor is not the same as the interface to a file file
descriptor or a pipe file descriptor.

Ptys exist *only* to give you pipes that look like, on one end, ttys.
Similarly job control. Also, the only way under traditional Unix to add
new abstraction modules, type managers, has been to add device drivers.

There are several remedies to these problems; some that have been tried are:

	Edition > 7 or System V.4 or Plan 9

One uses streams (or STREAMS) as uniform referent throughout, that is
file descriptors that can be flavoured dynamically in any way; so you
can create a pipe and flavour it as a tty, obviating the need for
specific pty code. Streams can lead to filesystems, which can provide
objects with arbitrary semantics. You can do session management by
adding a new filesystem type, one in which sessions are directories and
processes in the session files under it, for example.

	4BSD

The original design of BSD was pure billjoysm. The final design was
heavily influenced I guess by some capability man at UCB (easy to
name!), and less billjoyist. "wrappers" were introduced in the design,
which were to be software modules that would convert one flavour of file
descriptor (e.g. pipe or socket) to another (e.g. tty). Not exactly
similar to stream modules, but close. "user domains" were introduced to
provide for the definition of new type managers as user processes
defining new socket protocol families. Neither feature has ever been
implemented, I surmise because both attempted to paper over fundamental
problems of disuniformity and would have required an excessively
convoluted implementation. In the currently implemented, crippled BSD
we have, one must use a plethora of servers and hacks like ptys.

	Accent/Mach

In Accent the uniform referent is the port, or software IPC capability,
which is used throughout. Since user processes can create and distribute
new ports to other processes, any process can become a type server (both
streams and the BSD Unix domain allow this, but it has been rarely
exploited). This approach worked beautifully and failed to become
popular; Unix programmers seem to thrive on irregularity and ad-hoc'ism.
Mach was reborn as a more Unix-compatible, i.e. irregular, design, and
it has succeeded.

	MUSS

A very clever and little known design. Again uniform referent is the
port, like in Mach, even if a bit cruder naming structure is used.
Literally everything in the system, including filesystems, processes,
devices, can be accessed as a port (which can be given a symbolic name).
All ports in a network are equally accessible, so two arbitrary
processes, or devices, or whatever can communicate across machine
boundaries. As a rule the machine that drives your terminal is not the
machine where you create your processes and neither is the machine
running the file server.

When a message arrives on a port it contains in its header the
originating port number. Each line you type at a terminal (which is seen
within MUSS as a process) is sent as a message to some port whose
name/number you have given to the terminal driver. For example, suppose
you want to create a session with two shell processes; you would do
something like (I use symbolic port names throughout), if you are using
terminal 'tty2' on machine 'd':

	***M login/a		# connect to proc 'login' on machine 'a'
	pcg [pass] shell pcg1/b	# create shell process on 'b' called 'pcg1'
	created: pcg1/b		# reply message from 'login/a'
	pcg [pass] shell pcg2/c	# create shell process on 'c' called 'pcg2'
	created: pcg2/c		# reply message from 'login/a'

	***M pcg1/b		# connect to process 'pcg1/b'
	host			# send a command line to the shell on it
	b			# reply to previous line

	***M pcg2/c		# switch to process 'pcg2/c'
	host			# same command line to the shell in it
	c			# reply to previous line

	***M a *console		# switch to the 'console' device port on 'a'
	Pls mount pcg033 on b	# ask the operator to mount a tape on 'b'
	Tape mounted (OP)	# operator replies after doing a '***M d *tty2'

	....			# further work


	***M b pcg1		# switch back to 'b' 'pcg1'
	quit			# terminate the process and go home

At this point you have left 'pcg2/c' lying around; the next day from any
terminal you can do '***M pcg2/c' straight away and reconnect to it.
There is no need for any code to create sockets, ptys, handle STOP and
CONT signals; it is all free thanks to having chosen a clever
*architectural* structure, one based on the 'everything is a process
accessible as a port of exactly the same flavour everywhere' principle
instead of 'many things are a file/file descriptor of which there are
many different flavours'.  MUSS is a bit crude (for example, little
authentication is provided by default), and one can easily improve on it
(hindsight!), but it is sound.

It requires somebody with the talent and stubbornness of Dan Bernstein or
similar to write something like 'pty' which takes several hundred lines
under BSD, but under MUSS anybody can achieve the same effect in no
lines at all just because things have been designed right. Accent is not
as simple as MUSS, but at least it is vastly more flexible than either
mainstream Unix.

I think there are two clear lessons here: it is much better to get your
architecture right, and commercial success does not depend on it at all.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

yodaiken@chelm.cs.umass.edu (victor yodaiken) (02/09/91)

In article <BBC.91Feb8091541@sicilia.rice.edu> Benjamin Chase <bbc@rice.edu> writes:
>[Warning: This is just article #N in the USENET series of
>_Replies_To_Herman_Rubin.  It is just like all the others.  The
>veteran reader may safely skip it.  However, Herman _should_ read it.]
>
>hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
>
>>That particular step is the first step of any trigonometric or exponential
>>function computation when it is not known in advance that the argument is
>>small.
>
>Providing hardware support specifically for these things is unwise.
>The expected performance benefit does not justify the required
>additional complexity of the hardware.  Trigonometric and exponential
>functions represent a minute fraction of the average instruction usage
>of general purpose computers.  We all understand that you are not
>doing general purpose computing.  We don't care.  We are very sorry
>that your computer usage represents a minority, and thus has less
>clout when it comes time to design hardware, languages, etc., etc.
>

It might be more interesting to have a discussion about the
difficulty of special purpose architectures and the appropriate
architectures for scientific computation than to have a discussion about
marketing requirements. Programmable architectures, in which users could
configure the machine for a particular algorithm, have implications for
architectures, programming languages and operating systems. I know that
there have been some attempts at such architectures (mostly programmable
pipelines), but would like to hear about current efforts and/or reasons
why such designs would be good or bad ideas.

mbk@jacobi.ucsd.edu (Matt Kennel) (02/10/91)

In article <27B19A39.321E@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>According to hrubin@pop.stat.purdue.edu (Herman Rubin):
>>There is the mistaken view that hardware should be designed to particular
>>languages, and never mind that some programs may be many times slower
>>because of the lack of particular instructions.

>Machines have been designed for efficient execution in the most common
>cases.  If the most common cases are compiled C and Fortran programs,
>optimizing the hardware for those cases is only natural.
>
>Remember, Herman, your instruction mix is radically atypical.

I think Mr. Rubin's "beef" arises from a difference in computing cultures.
As far as I can discern from his previous postings, he regrets the lack of
both hardware and software provisions for mixed-mode and multi-precision
integer arithmetic, which he claims to be "trivial" to implement.  I have no
idea if this is true, but it certainly seems likely compared to the massive
effort that designers put into elaborate cache architectures and
multi-processor provisions.  This is the result of project managers at
successful CPU design firms optimizing their resources (i.e. employees' labor)
for the common case.

He writes computer programs to do _mathematics_, whereas the remaining
99% of the numerical programming world writes programs to do _science
and engineering_, where standard floating-point operations are the norm
and hence the object of substantial effort on the part of computer designers.

The only other group I can think of that does large-scale
computing similar to Mr Rubin's is the NSA, in the cryptanalysis field.
Unfortunately for the general public, they have the resources to acquire
their own customized computers, and the power to keep them an exclusive
secret.  Other than them, number theorists and other mathematicians
are a 'trivial' market, which is quite unfortunate if you happen to be one.

Mr Rubin's other complaint, that computer languages do not take advantage of
available hardware, is also true.  Before you write a rebuttal that says "but
hardly anybody wants to do that stuff anyway", consider vector and matrix
operations.  They're not an intrinsic part of any commonly used language (if
you discount the vapor-Fortran-9X-X-X), but nobody would dare say that
they're not very useful, considering that there are significant hardware
provisions in many large computers for accelerated vector operations.  (And
I still need to be convinced that C++ can do them efficiently for both
scalar and vector machines.)

Matt K
mbk@inls1.ucsd.edu

Nick_Janow@mindlink.UUCP (Nick Janow) (02/12/91)

xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) writes:

> Manipulating byte oriented data with 64 bit word oriented hardware seems kind
> of slow to me.  It may be a fast processor, but one would think that byte
> oriented instructions would sure make text oriented processing go a lot
> faster.

Wouldn't it be better to have an 8-bit coprocessor for handling text?  It
could be optimized for handling text.  It could have its own (8-bit) memory
and disk access, and do 64-bit transfers to/from the main memory.

You could even have an 8-bit multiprocessing module for handling text.  That
could handle even large text-processing tasks without causing the user to wait.
:)

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (02/12/91)

It's easy for hardware to give you the top 32 bits of a 32 by 32 integer
multiply, and the operation is very useful in many integer computations.

Fortran and C don't support this operation, so they're a lot less useful
for some computations than they could be. It would be good if high-level
languages provided the high word of a multiply.
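
A minimal sketch of the operation in C, assuming a compiler that supplies
an unsigned 64-bit type (a vendor extension on most systems); the name
mulhi is illustrative only:

    typedef unsigned int       u32;   /* assumed to be 32 bits */
    typedef unsigned long long u64;   /* assumed 64 bits; an extension */

    /* High 32 bits of a 32 x 32 -> 64 unsigned multiply.  A compiler
       could map this onto the single hardware instruction discussed. */
    u32 mulhi(u32 a, u32 b)
    {
        return (u32)(((u64)a * b) >> 32);
    }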

Some people think that if Fortran and C don't support an operation, it's
a waste to put the operation into new chips. They're wrong. Just because
language designers make mistakes doesn't mean those mistakes should last
forever.

I think this is Herman's point.

---Dan

xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) (02/13/91)

In article <3182@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>
>  This implies that there is some computer which can't run UNIX because
>it has efficient instructions, or that some hardware company has left
>out instructions which would make them many times faster.
>
(stuff deleted)

How about CRAY?  The last time I checked, it was still strictly a word
machine.  Manipulating byte oriented data with 64 bit word oriented
hardware seems kind of slow to me.  It may be a fast processor, but
one would think that byte oriented instructions would sure make text
oriented processing go a lot faster.

--
David A. Remaklus		   Currently at: NASA Lewis Research Center
Amdahl Corporation				 MS 142-4
(216) 642-1044					 Cleveland, Ohio  44135
(216) 433-5119					 xxremak@csduts1.lerc.nasa.gov

sef@kithrup.COM (Sean Eric Fagan) (02/13/91)

In article <1991Feb12.181720.26323@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:
>How about CRAY?  The last time I checked, it was still strictly a word
>machine.  Manipulating byte oriented data with 64 bit word oriented
>hardware seems kind of slow to me.  It may be a fast processor, but
>one would think that byte oriented instructions would sure make text
>oriented processing go a lot faster.

You do not buy a Cray to do text processing; if you *need* to do it there,
you can always stick with 64-bit char's (which have some other advantages
8-)).  If you want a fast, interactive, "normal" machine, go with an amdahl,
or, better yet, a supermicro.

-- 
Sean Eric Fagan  | "I made the universe, but please don't blame me for it;
sef@kithrup.COM  |  I had a bellyache at the time."
-----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.

barmar@think.com (Barry Margolin) (02/13/91)

In article <3159:Feb1213:56:3091@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>Some people think that if Fortran and C don't support an operation, it's
>a waste to put the operation into new chips. They're wrong. Just because
>language designers make mistakes doesn't mean those mistakes should last
>forever.

My guess (based on no hard evidence) is that Fortran and C are used for at
least 75% of systems and scientific programming, and this will almost
certainly be true for the lifetime of the coming generation of processors.
In this case, it makes sense for chips to be designed with those languages
in mind, since they aren't going away soon no matter how many mistakes the
language designers made (technical superiority hardly ever wins in this
business -- consider how many systems running IBM's horrible mainframe OSes
there are).  Yes, that means that the small minority of programs that can
make use of other operations will not be optimized as well.  But if 50% of
all programs double in speed while 10% are halved in speed (I think I'm
exaggerating the numbers in both directions), and the rest stay about the
same, and CPU prices also go down, that's a large overall gain.
--
Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar

jlg@lanl.gov (Jim Giles) (02/13/91)

From article <1991Feb12.181720.26323@eagle.lerc.nasa.gov>, by xxremak@csduts1.lerc.nasa.gov (David A. Remaklus):
> [...]
> How about CRAY?  The last time I checked, it was still strictly a word
> machine.  Manipulating byte oriented data with 64 bit word oriented
> hardware seems kind of slow to me.  It may be a fast processor, but
> one would think that byte oriented instructions would sure make text
> oriented processing go a lot faster.

Well, many character functions can be carried out on the Cray very
fast.  I've implemented move, translate (like upper- to lower-case),
scan (for first occurrence of a given character), etc.  They all
work on vectors full of packed characters.  The asymptotic speed
(speed of operation ignoring setup time) varies depending on the
operation being done.  Moving characters goes at one-eighth of a
clock per character.  Translating goes at about seven-eighths of
a clock per character.  Seems fast to me.

J. Giles

hrubin@pop.stat.purdue.edu (Herman Rubin) (02/13/91)

In article <4772@mindlink.UUCP>, Nick_Janow@mindlink.UUCP (Nick Janow) writes:
 
> xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) writes:
 
> > Manipulating byte oriented data with 64 bit word oriented hardware seems kind
> > of slow to me.  It may be a fast processor, but one would think that byte
> > oriented instructions would sure make text oriented processing go a lot
> > faster.
 
> Wouldn't it be better to have an 8-bit coprocessor for handling text?  It could
> be optimized for handling text.  It could have its own memory (8-bit), disk
> access and do 64-bit transfers to/from the main memory.
 
> You could even have an 8-bit multiprocessing module for handling text.  That
> could handle even large text-processing tasks without causing the user to wait.

There was a computer a long time ago which had two semi-independent processors.
Now I do not remember the precise word sizes, so no flames on that, please.
The main computer used 64-bit words and did the number crunching.  There was
a 16-bit computer which did control, address preparation (I believe that it
did not use index registers as we know them), etc.  There were interrupts and
waits to coordinate them, and a part of the main memory was used by the control
computer, and thus was accessible by both.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/13/91)

In article <3159:Feb1213:56:3091@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:

| Some people think that if Fortran and C don't support an operation, it's
| a waste to put the operation into new chips. 

  Making the assumption that (a) a vendor is selling into the
workstation market, and (b) that market is mostly C and FORTRAN, why
would it be a mistake to omit those features, accessible from assembler,
COBOL, and PL/I, and either use the chip space for something useful to
the majority of the users, or save the space and cut the cost of the
chip?

  I don't question that some applications need to do multiprecision
arithmetic, or that these features make it easier, but a vendor is not
out to develop an elegant chip which satisfies every need at the expense
of being competitive in price/performance.

  I've written packages like that on machines with the instructions you
mention, and it's very useful and quite fast. I've also done them in C
for machines which didn't have hardware support, and it's slow but
portable.

  While I regard those features as a plus, they don't count for much to
the average workstation user. Mainframe systems which expect high level
languages which can specify precision will of course provide them.

  Maybe some of the vendors will mention their feelings on why they do
or don't have this, or why they have it on some machines and not
others. Anyone want to comment? 
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) (02/14/91)

In article <4772@mindlink.UUCP> Nick_Janow@mindlink.UUCP (Nick Janow) writes:
>
>xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) writes:
>
>> Manipulating byte oriented data with 64 bit word oriented hardware seems kind
>> of slow to me.  It may be a fast processor, but one would think that byte
>> oriented instructions would sure make text oriented processing go a lot
>> faster.
>
>Wouldn't it be better to have an 8-bit coprocessor for handling text?  It could
>be optimized for handling text.  It could have its own memory (8-bit), disk
>access and do 64-bit transfers to/from the main memory.
>
>You could even have an 8-bit multiprocessing module for handling text.  That
>could handle even large text-processing tasks without causing the user to wait.
>:)

You're missing the point.  I wholly agree that it is ludicrous to use
a CRAY for 'text' processing, but compiles, editing, link editing,
grep'ing, etc. all fall under my heading of 'text' processing.  Regardless
of the appropriateness, CRAYs perform these functions.

I agree that an 8 bit (or 16 or 32 bit) micro is better suited for this
type of computing, but the problem remains that the elusive
'seamless environment' has yet to be created, so people use their CRAYs
for edits, compiles, and other UN*X utility programs.

--
David A. Remaklus		   Currently at: NASA Lewis Research Center
Amdahl Corporation				 MS 142-4
(216) 642-1044					 Cleveland, Ohio  44135
(216) 433-5119					 xxremak@csduts1.lerc.nasa.gov

scottl@convergent.com (Scott Lurndal) (02/14/91)

In article <1991Feb12.192725.21029@Think.COM>, barmar@think.com (Barry Margolin) writes:
|> In article <3159:Feb1213:56:3091@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein)
writes:
|> >Some people think that if Fortran and C don't support an operation, it's
|> >a waste to put the operation into new chips. They're wrong. Just because
|> >language designers make mistakes doesn't mean those mistakes should last
|> >forever.
|> 
|> My guess (based on no hard evidence) is that Fortran and C are used for at
|> least 75% of systems and scientific programming, and this will almost
|> certainly be true for the lifetime of the coming generation of processors.
Yeah, but systems and scientific programming is probably 25-30% of all 
programming.   The other 70% is COBOL, RPG, 4GL (of which some are translated
into COBOL) and others.

If you look at some of the dedicated COBOL engines (such as the UNISYS V-Series
(old Burroughs Medium Systems) line), you will find that the instruction set
will not support C with any efficiency at all, and FORTRAN is marginal.

The point?  You cannot design a processor which is all things to all people
(the swiss army knife processor - well, the B1900 was a good start).  If you
design a processor around any particular language, you have reduced the
overall usefulness of that processor.   Some of the current risc chips are
quite fast with scientific/systems applications (using C/Fortran/Pascal, et al.);
but performance falls rapidly when you start running COBOL applications which
require translation from BCD<->binary before and after each arithmetic op.

Now I personally don't like COBOL, but I recognize that there is a tremendous
investment in COBOL programs in industry - and they are not going to go away
tomorrow.

|> --
|> Barry Margolin, Thinking Machines Corp.
|> 
|> barmar@think.com
|> {uunet,harvard}!think!barmar

Scott Lurndal
UNISYS Network Computing Group

dik@cwi.nl (Dik T. Winter) (02/14/91)

In article <1991Feb13.180108.13480@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:
 > >xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) writes:
 > >> Manipulating byte oriented data with 64 bit word oriented hardware seems kind
 > >> of slow to me.  It may be a fast processor, but one would think that byte
 > >> oriented instructions would sure make text oriented processing go a lot
 > >> faster.
Perhaps.
...
 > You're missing the point.  I wholly agree that it is ludicrous to use
 > a CRAY for 'text' processing, but compiles, editing, link editing,
 > grep'ing, etc. all fall under my heading of 'text' processing.  Regardless
 > of the appropriateness, CRAYs perform these functions.
 > 
And they perform these functions well (and fast).  Just tried one of my
packages.  (Linecounts approximate:)
	Fortran source:		117 files,	6000 lines
	C source:		  8 files,	1900 lines
	Assembler source:	  1 file,	2000 lines
(of the 117 files Fortran source, 23 are include files used in many of the
other files).  The C source files go through 'sed' before going through the
compiler.  The other sources go through 'sed', 'cpp', 'sed' again before
going through the compiler/assembler.  Result with fully optimized compilation:
	real    0m47.94s
	user    0m22.17s
	sys     0m6.58s
I think this is reasonably fast.  (I noticed some compile times for individual
files; they ranged from 1.3 seconds for a 600 line source file (without
comments) down to 0.039 seconds.)

As somebody here remarked: you do not even have time to pick up your coffee
during a compile.

Consider also running 'vi' on a 1 Mb file and getting instantaneous
response.  The key feature in many 'text processing' operations is I/O.
And I/O speed on the Cray is adequate.  I did some timings some time ago:
copying 85 Mbyte of data took 6 seconds.  And all data had to be shifted
around in those 64 bit registers!

--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

pmontgom@euphemia.math.ucla.edu (Peter Montgomery) (02/14/91)

In article <1991Feb12.192725.21029@Think.COM> 
barmar@think.com (Barry Margolin) writes:
>In article <3159:Feb1213:56:3091@kramden.acf.nyu.edu> 
> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>>Some people think that if Fortran and C don't support an operation, it's
>>a waste to put the operation into new chips. They're wrong. Just because
>>language designers make mistakes doesn't mean those mistakes should last
>>forever.
>
>My guess (based on no hard evidence) is that Fortran and C are used for at
>least 75% of systems and scientific programming, and this will almost
>certainly be true for the lifetime of the coming generation of processors.
>In this case, it makes sense for chips to be designed with those languages
>in mind, since they aren't going away soon no matter how many mistakes the
>language designers made (technical superiority hardly ever wins in this
>business -- consider how many systems running IBM's horrible mainframe OSes
>there are).  

	Yes, most programs are written in these languages.
As Dan says, the language designers made mistakes.  During one
review period for Fortran 90, I requested an operation which
takes four nonnegative integers a, b, c, n with 
a < n or b < n (c is optional and defaults to 0).  
The requested operation returns q and/or r, where

		a*b + c = q*n + r   and   0 <= r < n

This operation is well-defined mathematically (and the definition does not
reference the machine architecture), with q and r guaranteed to fit in an
integer if the constraints are obeyed [if MAXINT is the maximum nonnegative
integer, then a*b + c <= (n-1)*MAXINT + MAXINT = n*MAXINT, so the quotient
cannot overflow].  This operation can be compiled using 32 x 32 = 64-bit
multiplication and 64/32 = 32 quotient, 32 remainder division on processors
supporting such instructions (e.g., 68020), and via a call to a library
routine which operates on one bit at a time on other systems, so compliance
will not be a burden on any implementor.
The committee declined my request at this time, 
though admitting it had some virtues.  

	Such an operation would benefit many programs I write, 
such as one now running (and taking two months on a NeXT with 68030) to find
solutions of b^(p-1) == 1 mod p^2 for p < 2^32 (this program needs the above 
primitive with n = p).  Since the operation is not available in C,
I have written the critical routines in assembly language.
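
A minimal sketch of the requested primitive in C, assuming an unsigned
64-bit type is available; the name muldivmod and the pointer-result
interface are illustrative only, not from any standard:

    typedef unsigned int       u32;   /* assumed to be 32 bits */
    typedef unsigned long long u64;   /* assumed 64 bits; an extension */

    /* a*b + c = q*n + r with 0 <= r < n.  As argued above, q cannot
       overflow 32 bits when a < n or b < n. */
    void muldivmod(u32 a, u32 b, u32 c, u32 n, u32 *q, u32 *r)
    {
        u64 t = (u64)a * b + c;       /* exact: fits in 64 bits */

        *q = (u32)(t / n);
        *r = (u32)(t % n);
    }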

	Until such primitive operations are added to our languages,
many programs will be coded to avoid them, often costing
labor time if not execution time.  And benchmarks,
which are written in standard Fortran or C, won't be able to
utilize the instructions even if available, unless compilers
are very clever at recognizing weird sequences simulating the constructs.
So if the chip makers look only at the benchmarks, they will omit 
such instructions from the hardware.  Fortunately
some manufacturers recognize the usefulness of double length
integer multiply and related instructions, even if the 
language designers make it awkward to use them.

	The language designers must add the primitives.  
Dan Bernstein, Herman Rubin, Robert Silverman, and I will be happy 
to provide advice on what we need if we know that the designers are listening.
Some, such as an integer GCD (greatest common divisor) function,
belong in the languages but probably not in the hardware.
Others, such as functions returning the truncated base 2 logarithm
or truncated integer square root and remainder, are optional for hardware.
Five years after these are introduced, when the constructs are 
widely available and ``ordinary'' programs (such as binary/decimal conversion
routines) have been (re)written to utilize the constructs, we can
look at (recently written) benchmarks to see which belong in hardware.
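
As an illustration of a language-level (not hardware) primitive, here is
a minimal C sketch of the binary GCD, which needs only shifts, tests, and
subtraction:

    /* Greatest common divisor by the binary algorithm. */
    unsigned gcd(unsigned a, unsigned b)
    {
        unsigned shift = 0;

        if (a == 0) return b;
        if (b == 0) return a;
        while (((a | b) & 1) == 0) {      /* strip common factors of 2 */
            a >>= 1; b >>= 1; shift++;
        }
        while ((a & 1) == 0) a >>= 1;     /* a is now odd */
        while (b != 0) {
            while ((b & 1) == 0) b >>= 1;
            if (a > b) { unsigned t = a; a = b; b = t; }
            b -= a;                       /* difference of odds is even */
        }
        return a << shift;
    }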

--
        Peter L. Montgomery 
        pmontgom@MATH.UCLA.EDU 
        Department of Mathematics, UCLA, Los Angeles, CA 90024-1555
If I spent as much time on my dissertation as I do reading news, I'd graduate.

dennisg@felix.UUCP (Dennis Griesser) (02/14/91)

In article <5934@idunno.Princeton.EDU> subbarao@phoenix.Princeton.EDU (Kartik
Subbarao) continues to slug it out with Jim over the wonders of UNIX.

Jim complained that most of the examples of the power of UNIX are both trivial
and arcane-looking.

Then Kartik defended a script for renaming files from .pas to .p 
>line noise? I really don't think that:
>  % foreach i (*.pas)
>  ? mv $i $i:r.p
>  ? end
>constitutes any more line noise than your inappropriate comments on choice
>of commands.
>
>How would you like to rename files in your favorite operating system?
>Please tell us.

OK.
  rename ?.pas to ?.p

CP-6.  Can be abbreviated all kinds of terse ways, or spelled out to appear
somewhat English-like.

On this issue, UNIX loses.  The best that can be said is that you can hide the
grungy stuff in an alias or a script file in your "bin" directory.  But when
you open up the script, it does look kinda like line noise in there.

< But this is really more of an OS friendliness issue than architecture. >

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/14/91)

In article <14390@lanl.gov> jlg@lanl.gov (Jim Giles) writes:

| Well, many character functions can be carried out on the Cray very
| fast.  I've implemented move, translate (like upper- to lower-case),
| scan (for first occurrence of a given character), etc..  They all
| work on vectors full of packed characters.  The asymptotic speed
| (speed of operation ignoring setup time) varies depending on the
| operation being done.  

  Without taking away from the value of what you've done, the world
doesn't ignore setup time. Typical things like searching a 20-50
character input line for a colon don't tkae long, and therefore the
setup time is important. That type of thing is a lot more prevalent in
the applications I've seen than scanning through a buffer measured in
kilobytes.

  We have an editor running on a Cray2, and when you do a lot of
searches and stuff it can be notably slow in terms of cpu used. And
while some Cray2-only changes in the source could be made, the object of
portable code is to have the compiler do the machine specific stuff.

  Did you code your stuff directly in assembler?
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

bs@linus.mitre.org (Robert D. Silverman) (02/14/91)

In article <2922@risky.Convergent.COM> scottl@convergent.com (Scott Lurndal) writes:
:In article <1991Feb12.192725.21029@Think.COM>, barmar@think.com (Barry Margolin) writes:
 
stuff deleted.

:If you look at some of the dedicated COBOL engines (such as the UNISYS V-Series
:(old Burroughs Medium Systems) line), you will find that the instruction set
:will not support C with any efficiency at all, and FORTRAN is marginal.
:
:The point?  You cannot design a processor which is all things to all people
:(the swiss army knife processor - (well the B1900 was a good start)).  If you 
:design a processor around any particular language, you have reduced the 
:overall usefulness of that processor.   Some of the current risc chips are
:quite fast with scientific/systems applications (using C/Fortran/Pascal, et. al.);
:but performance falls rapidly when you start running COBOL applications which
:require translation from BCD<->binary before and after each arithmetic op.
:
 
Yes. However, if DOUBLE PRECISION integer arithmetic were supported, the
need for BCD would totally disappear. Not only that, the integer arithmetic
would be at least an order of magnitude FASTER. There is no inherent reason
why the dollars and cents calculations [read: extended precision] cannot be
done in integer arithmetic.

It would require that the code emitter of some COBOL compilers be modified
to use integer, rather than BCD instructions, but this is not terribly
difficult to do.

Under these circumstances, both scientific and COBOL users would benefit.
			   ----
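
A minimal sketch of the idea in C, assuming a 64-bit integer type:
amounts are held as integer cents, so no BCD is needed.  The names and
the rounding rule are illustrative only:

    typedef long long cents_t;   /* amount in cents; assumed 64 bits */

    /* Amount plus tax; rate given in basis points (1/100 of a percent).
       Rounds half-up to the nearest cent, for nonnegative amounts. */
    cents_t add_tax(cents_t amount, int rate_bp)
    {
        return amount + (amount * rate_bp + 5000) / 10000;
    }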

:Now I personally don't like COBOL, but I recognize that there is a tremendous
:investment in COBOL programs in industry - and they are not going to go away
:tomorrow.
 
No COBOL programs would change -- only the compilers would change.

--
Bob Silverman
#include <std.disclaimer>
Mitre Corporation, Bedford, MA 01730
"You can lead a horse's ass to knowledge, but you can't make him think"

xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) (02/14/91)

In article <2933@charon.cwi.nl> dik@cwi.nl (Dik T. Winter) writes:
>In article <1991Feb13.180108.13480@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:
> > >xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) writes:
(stuff deleted)
> > You're missing the point.  I wholly agree that it is ludicrous to use
> > a CRAY for 'text' processing, but compiles, editing, link editing,
> > grep'ing, etc. all fall under my heading of 'text' processing.  Regardless
> > of the appropriateness, CRAYs perform these functions.
> > 
>And they perform these functions well (and fast).  Just tried one of my
>packages.  (Linecounts approximate:)
>	Fortran source:		117 files,	6000 lines
>	C source:		  8 files,	1900 lines
>	Assembler source:	  1 file,	2000 lines
>(of the 117 files Fortran source, 23 are include files used in many of the
>other files).  The C source files go through 'sed' before going through the
>compiler.  The other sources go through 'sed', 'cpp', 'sed' again before
>going through the compiler/assembler.  Result with fully optimized compilation:
>	real    0m47.94s
>	user    0m22.17s
>	sys     0m6.58s
>I think this is reasonably fast.  (I noticed some compile times for individual
>files; they ranged from 1.3 seconds for a 600 line source file (without
>comments) down to 0.039 seconds.)
>

Without a doubt, CRAY's are fast.  The original intent of this thread was
a question regarding instructions missing from an architecture that, if
present, would enable UN*X to run faster.  I offered CRAY as an example
of an architecture missing byte oriented operations that, if present,
might make UN*X (kernel and utilities) run faster.

However, the thrust of RISC, as I understand it, is to simplify the
architecture, thus enabling development of very fast fundamental elements.
While the CRAY is hardly RISC, I will concede that introduction of
byte addressing and byte instructions might very well have an adverse
effect on the overall speed of the processor.  Since people who have CRAYs
are interested in speed, the trade-off may not be worth it.

Now an architecture question.  It seems kind of silly to run certain
utilities on a CRAY.  Even though it can execute a compile or grep or
edit or etc. very fast, the vector unit sits idle for the time the
processor spends performing these functions.  Probably worst of all is
the interrupt rate generated by some of these functions, especially since
the CRAY has one of the worst process context switch times in the industry.
Is this a necessary evil in order to get effective use of a CRAY, or
wouldn't offloading this work to more 'appropriate' (the definition is
left to the reader) platforms within that elusive seamless environment
be better?

--
David A. Remaklus		   Currently at: NASA Lewis Research Center
Amdahl Corporation				 MS 142-4
(216) 642-1044					 Cleveland, Ohio  44135
(216) 433-5119					 xxremak@csduts1.lerc.nasa.gov

mccalpin@perelandra.cms.udel.edu (John D. McCalpin) (02/14/91)

On 14 Feb 91 15:37:47 GMT, xxremak@csduts1.lerc.nasa.gov (David Remaklus) said:

David> Now an architecture question.  It seems kind of silly to run
David> certain utilities on a CRAY.  Even though it can execute a
David> compile or grep or edit or etc. very fast, the vector unit sits
David> idle for the time the processor spends performing these
David> functions. [....]

David> Is this a necessary evil in order to get effective use of a
David> CRAY or wouldn't offloading this work to more 'appropriate'
David> (the definition is left to the reader) platforms within that
David> elusive seamless environment be better?

In the past when I have talked to supercomputer vendors about this
issue, I have gotten the following three responses. The most important
of the "other utilities" in terms of cpu time used is the Fortran
compiler, so that is what these answers address:

(1) If the supercomputer vendor wanted to move the compiler off of the
machine to a "more appropriate" machine, then who is going to decide
which one?  There are many possibilities:

	-- a scalar front-end mainframe?
	-- the user's workstation?
	-- a new, object-code compatible version of the super?

There are serious problems with all of these approaches:

	-- the front-end scalar mainframe may already be busy and will
           likely not have the user-friendly O/S that the users want;
	-- the user's workstations come in a bewildering variety of 
	   cpu types, O/S revision levels, etc;
	-- who is going to pay for the development of a slower,
	   cheaper, object-code compatible front-end?  Will anyone
	   actually purchase it?

(2) The compilers on the supercomputers often use very expensive
algorithms in their optimization stages, simply because the
supercomputer has the horsepower to do it.  Trying to compile a large
package on a 1 MIPS workstation (which was what I had at the time)
could take 30 minutes instead of 30 seconds.

(3) At least on the Cyber 205, I was told (by Neil Lincoln) that the
compiler actually made very heavy use of the vector unit.  I think I
remember Neil telling me that he wrote a paper on the topic of vectorized
code optimization in the late 70's or early 80's....

Notice that a number of things have changed in the 4-5 years since I
had these conversations:

(1) With the overwhelming availability of UNIX in the marketplace, the
production of a portable cross-compiler is *much* easier now than it
was 5 years ago.  For example, with the Cray change from the old CFT
compilers (which I believe were written in assembly language) to the
new CFT77 compiler (which I believe is written in an HLL --- Pascal
maybe?) the porting should be very easy.

(2) Instead of a 1 MIPS workstation on my desk, I now have two
machines, with integer SPECmarks of 12 and 14, respectively, so the
hypothetical 30 minute compile jobs might only be 3 minutes -- a far
more attractive number.

Other things are still far from the happy paradise of the "seamless
environment", especially O/S support for process migration and answers
to very difficult questions about how to decide where to run things in
a heterogeneous environment....
--
John D. McCalpin			mccalpin@perelandra.cms.udel.edu
Assistant Professor			mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.	J.MCCALPIN/OMNET

jlg@lanl.gov (Jim Giles) (02/15/91)

From article <3198@crdos1.crd.ge.COM>, by davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr):
> [... vector character stuff ...]
>   Did you code your stuff directly in assembler?

It has been done in Fortran and could be done in C.  The only difference
using a high-level language makes is to increase the required set-up
time for the operations.  That's why I used assembler.  Setup time
can be reduced further by not vectorizing and working with the
characters one word (8 characters) at a time - this would be the
method of choice for, say, compilers, where the length of the strings
in use is fairly short on average (tokens, etc.).

Actually, it _does_ require some non-portable things.  You need to treat
the character string as numbers (word-size integers are best - with the
characters still packed).  For all the operations except the character
'move' operation, you must assume that the character set is 7-bit ASCII
and that each character is the low order bits of an 8-bit field (you need
the eighth bit to catch carries from arithmetic on the characters).
Finally, the characters need to be packed in the word in a sequential
way: if loading the string "abcd" into your 32-bit word produces a
register with "badc" in it, the algorithm won't work (fortunately,
most machines that do such strange things have character instructions
and don't need this algorithm anyway).
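
A minimal sketch of the kind of word-at-a-time trick described here, in C
with an assumed 64-bit unsigned type (the real versions were Cray Fortran
or assembler; this is illustrative only):

    typedef unsigned long long u64;    /* assumed 64 bits; an extension */

    /* Nonzero iff some byte of w equals the 7-bit ASCII char c.
       w holds 8 packed characters, each in the low 7 bits of an 8-bit
       field; the spare eighth bit catches the borrows. */
    int word_contains(u64 w, unsigned char c)
    {
        u64 ones = 0x0101010101010101ULL;
        u64 m = w ^ (ones * (u64)c);   /* a matching byte becomes zero */

        return ((m - ones) & ~m & (ones << 7)) != 0;
    }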

To be sure, if your machine has character instructions, you wouldn't
want to use this technique.  But not all machines have character
instructions - the extra hardware/instruction-set overhead would
make them uncompetitive.  It is a mistake to believe that the only
recourse on such machines is a brute-force unpacking of all the chars,
processing them one at a time, and packing them back up.

J. Giles

hrubin@pop.stat.purdue.edu (Herman Rubin) (02/15/91)

In article <1991Feb12.192725.21029@Think.COM>, barmar@think.com (Barry Margolin) writes:
> In article <3159:Feb1213:56:3091@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> >Some people think that if Fortran and C don't support an operation, it's
> >a waste to put the operation into new chips. They're wrong. Just because
> >language designers make mistakes doesn't mean those mistakes should last
> >forever.
> 
> My guess (based on no hard evidence) is that Fortran and C are used for at
> least 75% of systems and scientific programming, and this will almost
> certainly be true for the lifetime of the coming generation of processors.
> In this case, it makes sense for chips to be designed with those languages
> in mind, since they aren't going away soon no matter how many mistakes the
> language designers made (technical superiority hardly ever wins in this
> business -- consider how many systems running IBM's horrible mainframe OSes
> there are).  Yes, that means that the small minority of programs that can
> make use of other operations will not be optimized as well.  But if 50% of
> all programs double in speed while 10% are halved in speed (I think I'm
> >exaggerating the numbers in both directions), and the rest stay about the
> same, and CPU prices also go down, that's a large overall gain.

Even this is misleading.  Although unfortunately it is not completely true,
to a considerable extent the libraries are produced by people with a better
understanding of programming.  The missing instructions are useful in such
things as computing the elementary transcendental functions, or in doing
mixed integer and floating point arithmetic, supported by both of the major
languages above and most others.  Even if a separate arithmetic unit for
doing the good arithmetic is needed, non-business users are likely to be
willing to pay for it.

I doubt that incorporating a larger instruction set need have much effect
in slowing down the existing programs.  The hardware for good integer 
arithmetic and for floating point arithmetic is essentially the same,
with some complications for the floating point part.  Possibly the larger
instruction set would slow things down on the order of 5%.  I suggest that
those who think that not much slowdown occurs in these other problems try
programming them.

Some of the new "hot" hardware does not even support efficient conversion
between integer and floating point.  On the RS/6000, this is a real beast,
involving several instructions and using memory for the transfer.  The time
factor for this is likely to be around 1/10 rather than 1/2.  If multiple
precision arithmetic is needed, the factor is likely to be at least as large,
and the programs that use it spend most of their time in it.  So it is more
like 50% of the programs running 5% or less slower, 15% taking several times
as long to run, and the rest somewhere in between.

--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

lamaster@pioneer.arc.nasa.gov (Hugh LaMaster) (02/15/91)

In article <1991Feb14.153747.26911@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:
>In article <2933@charon.cwi.nl> dik@cwi.nl (Dik T. Winter) writes:
>>In article <1991Feb13.180108.13480@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:
>> > >xxremak@csduts1.lerc.nasa.gov (David A. Remaklus) writes:
>(stuff deleted)
>While the CRAY is hardly RISC, I will concede that introduction of

Actually, the Cray-1/X/Y *are* RISC machines.  A little too RISC, in fact, 
since the lack of memory mapping hardware increases swapping overhead when
memory contention becomes a problem.  However, the architecture has a very 
simple instruction set, is a load/store machine, and many operations take
only one cycle and can be heavily pipelined.  Just because it has a 512 Word
Vector Register set does not make it non-RISC.  Just think of it as a
programmable cache.  Kind of small, really - only 4 K Bytes.  The cost of
swapping the vector registers is a lot less than most people assume.  (For
the record, the Cray-2 architecture *is* a little slower at context
switching.  It isn't the same architecture as the 1/X/Y... series.)  Anyway,
if RISC means anything at all, surely the simple instruction set and
load/store architecture of the Cray make it a RISC machine.  I don't, by the
way, equate RISC with goodness, decency, and wholesomeness, as some do.

I note that a previous poster referred to another machine, the Cyber 205,
as non-RISC.  True - the vector instruction set was a very non-RISC memory-to-
memory design.  Just for the record, the *scalar* part of the 205 is a
RISC design.  For anyone who cares to remember :-)  I am still uncertain as to
whether a memory to memory vector processor is a *good idea* or a *bad idea*,
myself.  On the plus side, vector operations are good candidates for memory
to memory operations (think of all those array processors built that way.)
On the minus side, it complicates pipelining, although ETA seemed to finally
get that aspect under control on the (note: virtual memory) ETA-10... 
The Cray, with its vector register load/store design, is a RISC vector machine.
But, I digress.


>Now an architecture question.  It seems kind of silly to run certain
>utilities on a CRAY.  Even though it can execute a compile or grep or
>edit or etc. very fast, the vector unit sits idle for the time the
>processor spends performing these functions.  Probably worst of all is
>the interrupt rate generated by some of these functions.  Especially since
>the CRAY has one of the worst process context switch times in the industry.

In fact, the Cray is decently fast at context switch times.  (A few years ago,
it had the record, but that is no longer true.)  It isn't very cost effective,
in context-switches/sec/$, but it is fast.  Fast enough that it isn't a factor
in a normal Cray workload.

>Is this a necessary evil in order to get effective use of a CRAY or
>wouldn't offloading this work to more 'appropriate' (the definition is
>left to the reader) platforms within that ellusive seamless environment
>be better?

The Cray is *very fast* at editing.  It isn't very *cost effective* at
editing, if that is all you are buying a machine for.  However, consider
that it *may* be more cost effective to, for example, change a few lines
of source code in place than to copy it somewhere else, change it, and copy
it back.  In any case, it really doesn't matter!  Why?  I have looked at
this on our Y-MP, and so have others on theirs.  A *trivial* number of CPU
cycles are expended in editing, in actual use.

How fast is the Cray at context switches?  I don't have a good program to
measure it on a fully loaded system during production time, but looking at 
"sar" on our machine during heavy interactive times, 
it appears that, through linear extrapolation, the Cray Y-MP can do
between 2000 and 3000 c-s's/sec/CPU.  Or, perhaps I should say *at least*,
because I don't have a good way to see how much kernel overhead is expended
doing various operations.  I can say, though, that the following problem is
much more significant:


Of more importance to Cray efficiency, doing its normal workload, is the 
lack of memory mapping hardware, which forces the kernel to do a lot of 
copying to fit new processes into memory, or to expand the size of existing 
processes.  It also forces the Cray to swap large memory processes much
more frequently than it should, based on memory availability.  This is 
particularly a problem during heavy interactive use.  

Not editing, mind you, which is a trivial overhead.  But, during "real"   :-) 
interactive supercomputing: e.g. interactive graphics output from a numerical 
simulation.  

The Cray needs an MMU!


  Hugh LaMaster, M/S 233-9,  UUCP:                ames!lamaster
  NASA Ames Research Center  Internet:            lamaster@ames.arc.nasa.gov
  Moffett Field, CA 94035    With Good Mailer:    lamaster@george.arc.nasa.gov 
  Phone:  415/604-6117       

jerry@TALOS.UUCP (Jerry Gitomer) (02/15/91)

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

:In article <3159:Feb1213:56:3091@kramden.acf.nyu.edu: brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:

:| Some people think that if Fortran and C don't support an operation, it's
:| a waste to put the operation into new chips.

:  Making the assumption that (a) a vendor is selling into the
:workstation market, and (b) that market is mostly C and FORTRAN, why
:would it be a mistake to omit those features, accessible from assembler,
:COBOL, and PL/I, and either use the chip space for something useful to
:the majority of the users, or save the space and cut the cost of the
:chip?

:  I don't question that some applications need to do multiprecision
:arithmentic, or that these features make it easier, but a vendor is not
:out to develop an elegant chip which satisfies every need at the expense
:of being competitive in price/performance.

:  I've written packages like that on machines with the instructions you
:mention, and it's very useful and quite fast. I've also done them in C
:for machines which didn't have hardware support, and it's slow but
:portable.

When I was with Sperry (now Unisys) we compared the performance of two of
our systems, a V77 mini and a small mainframe (a 90/30). The mini was 9
times faster running our standard FORTRAN benchmark and the mainframe much
faster running our standard COBOL benchmark!

The V77 reflected the desires and needs of an arithmetic oriented customer
base beating on the hardware designers for three (hardware) generations
while the 90/30 reflected the desires and needs of business data processors
beating on the designers for even more generations.

The moral of the story is: If a vendor perceives a market segment to be
large enough (this varies from vendor to vendor) they will design a machine
tailored to that market segment.
-- 
Jerry Gitomer at National Political Resources Inc, Alexandria, VA USA
I am apolitical, have no resources, and speak only for myself.
Ma Bell (703)683-9090      (UUCP:  ...{uupsi,vrdxhq}!pbs!npri6!jerry 

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) (02/15/91)

...
>	Until such primitive operations are added to our languages,

A major reason for adding operator overloading and modules is to
permit such things to be defined apart from the language itself
(possibly by folks with specialized needs). 

In comp.lang.fortran there has been some discussion of what should
folks start thinking about working on "next" ("fortran90" looking
likely to be an ISO standard before the end of the year). A very
reasonable thing would be a standard module for entertaining
mathematical problems such as this.

--
----------------------------------------------------------------
Keith H. Bierman    kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/15/91)

In article <1991Feb14.153747.26911@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:

| Now an architecture question.  It seems kind of silly to run certain
| utilities on a CRAY.  Even though it can execute a compile or grep or
| edit or etc. very fast, the vector unit sits idle for the time the
| processor spends performing these functions.  Probably worst of all is
| the interrupt rate generated by some of these functions.  Especially since
| the CRAY has one of the worst process context switch times in the industry.
| Is this a necessary evil in order to get effective use of a CRAY or
| wouldn't offloading this work to more 'appropriate' (the definition is
| left to the reader) platforms within that elusive seamless environment
| be better?

  When the seamless environment comes along... We had this argument with
the people who run the Cray2 we use. They really didn't want to support
character at a time interrupts for our portable screen editor. Their
argument was that "it doesn't make good use of the machine." Our reply
was that we were not interested in making good use of the machine, we
were interested in making good use of the PhD's who use it. And having
them moving files to a VAX to edit one line, then back to compile, is a
poor use of them.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/15/91)

In article <MCCALPIN.91Feb14105834@pereland.cms.udel.edu> mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:

| (2) The compilers on the supercomputers often use very expensive
| algorithms in their optimization stages, simply because the
| supercomputer has the horsepower to do it.  Trying to compile a large
| package on a 1 MIPS workstation (which was what I had at the time)
| could take 30 minutes instead of 30 seconds.

  I'm told that one of the programs which benefited from the Convex
vectorizing C compiler was the Convex vectorizing C compiler...
optimization should be able to use parallel and vector technology.

| Other things are still far from the happy paradise of the "seamless
| environment", especially O/S support for process migration and answers
| to very difficult questions about how to decide where to run things in
| a heterogeneous environment....

  I would like to see the workstation slide jobs off onto the SC for
execution, and given enough bandwidth that's possible. The problem is
that the output of these jobs is sometimes very large, and it would have
to come back. If you could put the files on the SC and NFS export them,
without wasting a lot of CPU on the SC, that might be a solution.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

mcdonald@aries.scs.uiuc.edu (Doug McDonald) (02/15/91)

In article <KHB.91Feb14183225@chiba.Eng.Sun.COM> khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) writes:
>....
>>	Until such primitive operations are added to our languages,
>
>A major reason for adding operator overloading and modules is to
>permit such things to be defined apart from the language itself
>(possibly by folks with specialized needs). 
>
>In comp.lang.fortran there has been some discussion of what should
>folks start thinking about working on "next" ("fortran90" looking
>likely to be an ISO standard before the end of the year). A very
>reasonable thing would be a standard module for entertaining
>mathematical problems such as this.
>


Mr. Bierman apparently misunderstands: these things will do NO GOOD
if they are implemented as "modules" written in the old language
which is missing them: if somebody does that, they CAN'T use the
special machine constructs that would make them fast. They COULD
of course be written as functions in assembler, if there happens to be
some reasonably fast way of getting the data in and out in function
form.

Anything CAN be written in C - but it might not be fast enough to
be useful.

Doug McDonald

jlg@lanl.gov (Jim Giles) (02/15/91)

From article <1087@kaos.MATH.UCLA.EDU>, by pmontgom@euphemia.math.ucla.edu (Peter Montgomery):
> [...]
> The requested operation returns q and/or r, where
> 
> 		a*b + c = q*n + r   and   0 <= r < n
> 
> This operation is well-defined mathematically 
> [...]

You will, unfortunately, receive little sympathy for this kind of
request.  The UNIX community, in particular, will accuse you of
requesting a 'swiss army knife' compiler.  This would be a valid
point if the functionality you requested could be provided
efficiently by composing other features of the language.  What is
really needed is a two-fold approach: 1) the simpler features of
this kind need to be intrinsics of the language (or, at least one
language); 2) there needs to be a way for the user to define
operations which are 'inlined' _before_ optimization - and it should
be possible for these user defined 'intrinsics' to be written in
assembly.  This second requirement opens the possibility of
"supplementary standards" which define language extensions as
packages of 'inlined' procedures.  Ordinary packages or modules
that languages already have are inadequate to this task because
they can't be forced to 'inline' the code, and they can't be
written in assembly to take advantage of machine specific
instructions in ways the compiler writer didn't consider.

J. Giles

mccalpin@perelandra.cms.udel.edu (John D. McCalpin) (02/16/91)

>On 15 Feb 91 15:19:10 GMT, mcdonald@aries.scs.uiuc.edu (Doug McDonald) said:

Doug> In article <KHB.91Feb14183225@chiba.Eng.Sun.COM> 
           khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) writes:

>A major reason for adding operator overloading and modules is to
>permit such things to be defined apart from the language itself
>(possibly by folks with specialized needs). 
>
>In comp.lang.fortran there has been some discussion of what should
>folks start thinking about working on "next" ("fortran90" looking
>likely to be an ISO standard before the end of the year). A very
>reasonable thing would be a standard module for entertaining
>mathematical problems such as this.

Doug> Mr. Bierman apparently misunderstands: these things will do NO GOOD
Doug> if they are implemented as "modules" written into the old language
Doug> which is missing them: if somebody does that, they CAN'T use the
Doug> special machine constructs that would make them fast. [....]

This was one of the points that was addressed in the Algol-68 concept
of standard packages.  The idea was that several commonly used modules
would be standardized as part of the language.  They could be defined
in terms of the language, which would provide instant portability,
but might also suffer in performance (as McDonald points out above).

The key, of course, is that the modules are "standard", so that there
exists some incentive for the vendors to produce optimized versions,
which presumably could make use of any hardware or software tricks
that are available.

An analogy would be to require the level 1,2,3 BLAS (Basic Linear
Algebra Subroutines) to be a part of a standard-conforming Fortran
implementation.  They can be ported almost instantly by simply
compiling the Fortran source, or they can be as carefully optimized as
the vendor desires.
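
For illustration, here is a C rendering of the simplest case of the
Level-1 BLAS daxpy (y := a*x + y), unit strides only; the portable
reference version is essentially this loop, and a vendor would supply a
hand-tuned vector version behind the same interface (the BLAS themselves
are specified in Fortran, so this C form is only a sketch):

    /* Reference daxpy, unit stride: y[i] += a * x[i], i = 0..n-1. */
    void daxpy(int n, double a, const double *x, double *y)
    {
        int i;

        for (i = 0; i < n; i++)
            y[i] += a * x[i];
    }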

The discussion in comp.lang.fortran is based on the idea that if
certain modules come to be in very widespread use, then it might be
appropriate to request that that functionality be included in the next
revision of the base language.  In fact, the process does not need to
be so formal.  If the modules are actually in such widespread use,
then buyers will simply make use of such modules in their
qualification benchmarks and the vendors will quickly respond by
providing optimized versions.  

This is beginning to happen with the Level-3 BLAS and the LAPACK
project even though such libraries do not fit into the language as
easily as modules.  I expect that a 'module' version of LAPACK will be
written very shortly after Fortran Extended becomes available and that
it will be immensely popular.
--
John D. McCalpin			mccalpin@perelandra.cms.udel.edu
Assistant Professor			mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.	J.MCCALPIN/OMNET

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) (02/16/91)

>Mr. Bierman apparently misunderstands: these things will do NO GOOD

Mr. McDonald is missing my point; perhaps I should have been more
explicit.

There has been a long running discussion (or set of flame wars ;>) in
this arena which boils down to:

	1)  I have these special things I want.

	2)  designers put in even less support, because
	    usage statistics indicate that they go unused.
	    
	3)  Thus life gets worse and worse.

A standard MODULE would provide the functionality everywhere.

If it gets used, there would be effort made to make the implementation
efficient, it can be hand coded by the vendor, etc.

At the very least, it provides a way for folks to actually measure how
much having special instructions in hw would benefit.

c++, f90  and other languages can be leveraged in this fashion.

--
----------------------------------------------------------------
Keith H. Bierman    kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

chip@tct.uucp (Chip Salzenberg) (02/16/91)

According to jlg@lanl.gov (Jim Giles):
>For all the operations except the character 'move' operation, you must
>assume that the character set is 7-bit ASCII ...

Is that a reasonable assumption nowadays?  I'd not consider a text
manipulation program "working" if it gave incorrect results on
eight-bit characters.

Ob. Arch: Byte manipulation is expensive when shortcuts are
disallowed.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
 "I want to mention that my opinions whether real or not are MY opinions."
             -- the inevitable William "Billy" Steinmetz

sef@kithrup.COM (Sean Eric Fagan) (02/16/91)

(Hey!  This is actually mildly architectural related!)
In article <3204@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>  When the seamless environment comes along... We had this argument with
>the people who run the Cray2 we use. They really didn't want to support
>character at a time interrupts for our portable screen editor. Their
>argument was that "it doesn't make good use of the machine." Our reply
>was that we were not interested in making good use of the machine, we
>were interested in making good use of the PhD's who use it. And having
>them moving files to a VAX to edit one line, then back to compile, is a
>poor use of them.

It's at times like this that I suggest using sam, available from the AT&T
toolchest.  sam consists of two parts, one being the "user-friendly"
interface, the other being the editor.  sam-the-editor is a very powerful
(but very stupid) line-oriented editor (that is, its commands are given a
line at a time); sam-the-user-friendly-interface, however, is a nice
window-oriented editor with a nice graphical interface.  stufi translates
what you want to do into ste commands.

I really like the idea of sam; it lets you use your workstation as
inefficiently as you wish to, yet doesn't require the remote site to jump
through hoops to support it.  (And, yes, some machines *do* have to jump
through hoops to permit character-at-a-time response!)

-- 
Sean Eric Fagan  | "I made the universe, but please don't blame me for it;
sef@kithrup.COM  |  I had a bellyache at the time."
-----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.

jlg@lanl.gov (Jim Giles) (02/16/91)

From article <3204@crdos1.crd.ge.COM>, by davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr):
> In article <1991Feb14.153747.26911@eagle.lerc.nasa.gov> xxremak@csduts1.UUCP (David A. Remaklus) writes:
> [...]                                They really didn't want to support
> character at a time interrupts for our portable screen editor. Their
> argument was that "it doesn't make good use of the machine." Our reply
> was that we were not interested in making good use of the machine, we
> were interested in making good use of the PhD's who use it. And having
> them moving files to a VAX to edit one line, then back to compile, is a
> poor use of them.

Character at a time interrupts are still a poor way of using the
machine.  The same functionality could be provided by polling from the
OS each time it gets control (and, maybe, having a timer set so that
the OS got control at least once every 100 milliseconds).  The system
would then do far fewer context switches when interaction was very
busy (most busy Crays give control to the system _far_ more often than
every 100 ms anyway - and few people can type 10 characters per second -
nor do they expect response to interactive typing any faster than that).

Of course, the timer is just a secondary suggestion.  Your character
can't be processed until your program gets a timeslice - and the
system has to be in control in order to schedule you.  On a busy
Cray, the average (CPU) timeslice is less than a second too.  (The
average memory residence between swaps is less than a minute.)  So,
even without the timer, response to your keystroke will, on average,
come as soon as your program next gets control.
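
A user-level approximation of the scheme, for illustration only (the
actual proposal concerns the kernel's terminal driver): a C sketch using
BSD select(2) to check for pending input, waiting at most 100 ms.

    #include <sys/types.h>
    #include <sys/time.h>

    /* Nonzero if input is waiting on stdin; blocks at most 100 ms.
       One poll per tick instead of one interrupt per keystroke. */
    int input_pending(void)
    {
        fd_set fds;
        struct timeval tv;

        FD_ZERO(&fds);
        FD_SET(0, &fds);                 /* descriptor 0 = stdin */
        tv.tv_sec = 0;
        tv.tv_usec = 100000;             /* 100 milliseconds */
        return select(1, &fds, (fd_set *)0, (fd_set *)0, &tv) > 0;
    }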

J. Giles

scottl@convergent.com (Scott Lurndal) (02/16/91)

In article <1991Feb14.151831.15426@linus.mitre.org>, bs@linus.mitre.org (Robert D. Silverman)
writes:
|> In article <2922@risky.Convergent.COM> scottl@convergent.com (Scott Lurndal) writes:
|> :In article <1991Feb12.192725.21029@Think.COM>, barmar@think.com (Barry Margolin) writes:
|>  
|> stuff deleted.
|> 
|> :If you look at some of the dedicated COBOL engines (such as the UNISYS V-Series
|> :(old Burroughs Medium Systems) line), you will find that the instruction set
|> :will not support C with any efficiency at all, and FORTRAN is marginal.
|> :
|> :The point?  You cannot design a processor which is all things to all people
|> :(the swiss army knife processor - (well the B1900 was a good start)).  If you 
|> :design a processor around any particular language, you have reduced the 
|> :overall usefulness of that processor.  Some of the current risc chips are
|> :quite fast with scientific/systems applications (using C/Fortran/Pascal, et al.);
|> :but performance falls rapidly when you start running COBOL applications which
|> :require translation from BCD<->binary before and after each arithmetic op.
|> :
|>  
|> Yes. However, if DOUBLE PRECISION integer arithmetic were supported, the
|> need for BCD would totally disappear. Not only that, the integer arithmetic
|> would be at least an order of magnitude FASTER. There is no inherent reason
|> why the dollars and cents calculations [read: extended precision] cannot be
|> done in integer arithmetic.
|>
What about all the data files, and millions of 9-track tapes which contain
BCD data?  There is a conversion cost involved here if you cannot just read
and use it as is.

On architectures which support BCD, the programmer understands the layout
of data items in memory and, via REDEFINES clauses, redeclares (a la union)
a section of memory in another form (e.g. to split dollars and cents, et al.).
This clearly would require more than just a recompile to work at all.
REDEFINES clauses are heavily used in COBOL applications.
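
In C terms the overlay looks something like this (a made-up record,
using zoned decimal -- one digit per byte -- rather than packed, but
the point is the same):

#include <stdio.h>
#include <string.h>

/* Hypothetical record field: nine zoned-decimal digits, one per byte,
 * overlaid to split dollars from cents by position -- the C analogue
 * of a COBOL REDEFINES clause.  Store the field as a binary integer
 * instead and this overlay (and every tape written this way) breaks. */
union amount {
    char digits[9];                     /* "000123456" means $1,234.56 */
    struct {
        char dollars[7];                /* digit positions 1-7 */
        char cents[2];                  /* digit positions 8-9 */
    } parts;
};

int main(void)
{
    union amount a;

    memcpy(a.digits, "000123456", 9);
    printf("cents: %.2s\n", a.parts.cents);     /* prints "56" */
    return 0;
}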

|> Bob Silverman
|> #include <std.disclaimer>
|> Mitre Corporation, Bedford, MA 01730
|> "You can lead a horse's ass to knowledge, but you can't make him think"
Scott Lurndal, UNISYS Corporation  - I can't even speak for myself....

rick@pavlov.ssctr.bcm.tmc.edu (Richard H. Miller) (02/16/91)

In article <1991Feb14.151831.15426@linus.mitre.org> bs@linus.mitre.org (Robert D. Silverman) writes:
> 
>Yes. However, if DOUBLE PRECISION integer arithmetic were supported, the
>need for BCD would totally disappear. Not only that, the integer arithmetic
>would be at least an order of magnitude FASTER. There is no inherent reason
>why the dollars and cents calculations [read: extended precision] cannot be
>done in integer arithmetic.
>
>It would require that the code emitter of some COBOL compilers be modified
>to use integer, rather than BCD instructions, but this is not terribly
>difficult to do.
>
>Under these circumstances, both scientific and COBOL users would benefit.
>			   ----
>
>:Now I personally don't like COBOL, but I recognize that there is a tremendous
>:investment in COBOL programs in industry - and they are not going to go away
>:tomorrow.
> 
>No COBOL programs would change -- only the compilers would change.

This is certainly not correct. I don't know how much DP application experience
you have had, but the calculation aspect of business data processing, although
important, is not the primary reason for decimal arithmetic. Most data 
processing applications are designed to read records, do some processing and
report on them. In fact, the majority of data processing is to produce reports
and reports tend to be in decimal format. Thus, if you eliminated BCD, the
calculations would be faster, but every time you wanted to put the results
out, you would have to convert the integer values to decimal format, edit the
number into the output format and then output it. [Most architectures which
support decimal also support EDIT instructions]. 
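
Sketched in ordinary C, the convert-and-edit step that a binary
representation forces onto every output line looks like this (sprintf
standing in, very loosely, for what an EDIT instruction does in one
shot on decimal data; a made-up routine, non-negative amounts only):

#include <stdio.h>

/* Divide the binary cents value back into decimal digits and format
 * it for the report line.  With BCD data and a hardware EDIT
 * instruction, no such conversion pass is needed. */
void edit_amount(long cents, char out[])
{
    sprintf(out, "$%ld.%02ld", cents / 100, cents % 100);
}

int main(void)
{
    char buf[32];

    edit_amount(123456L, buf);
    printf("%s\n", buf);                /* prints $1234.56 */
    return 0;
}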

Another important consideration is the fact that many files already are set
up with decimal fields. If you change the compiler to handle integer only, you
will either have to automatically convert input records from decimal to 
integer so you now have conversion->processing->conversion, or you have to
invest a lot of money in doing the systems design and maintenance to convert
all of the BCD fields to either integer or character. [You now are talking
about a fundamental change to the application which requires (or should) 
the services of a systems analyst, programmers, testing and quality
assurance.] 

The bottom line is that programs would change or processing time will go up. 
There actually is a need for certain architectural support for business data
processing for our applications. [We don't make scientific programmers use
decimal arithmetic in their code and they don't make system programmers write
with floating point, so don't make us use your approach. Tailor the architecture
to the application if possible and do not assume the same architecture will
work for all.]
 
-- 
Richard H. Miller                 Email: rick@bcm.tmc.edu
Asst. Dir. for Technical Support  Voice: (713)798-3532
Baylor College of Medicine        US Mail: One Baylor Plaza, 302H
                                           Houston, Texas 77030

hrubin@pop.stat.purdue.edu (Herman Rubin) (02/16/91)

In article <MCCALPIN.91Feb15111626@pereland.cms.udel.edu>, mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:
> >On 15 Feb 91 15:19:10 GMT, mcdonald@aries.scs.uiuc.edu (Doug McDonald) said:
> 
> Doug> In article <KHB.91Feb14183225@chiba.Eng.Sun.COM> 
>            khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) writes:

			........................

> This was one of the points that was addressed in the Algol-68 concept
> of standard packages.  The idea was that several commonly used modules
> would be standardized as part of the language.  They could be defined
> in terms of the language, which would provide instant portability,
> but might also suffer in performance (as McDonald points out above).

The hardware of that time had major differences from that of today.  One
of them was that for most of the machines of the time, transfers were the
fast operations, and memory access next, while arithmetic was relatively
slow.  People were still counting multiplication/divisions as the cost
of computing, and largely ignoring memory/register considerations.  The
B5500 was designed as an Algol machine, and had a stack, but essentially
no registers.  Inlining did not pay much.

Times have changed.  On the mainframes I am familiar with, context switch,
and even subroutine call/return, are costly.  Instruction memory is rarely
a problem.  Implementing a hardware operation in software can multiply the
cost by 10 or more.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)

rob@array.UUCP (Rob Marchand) (02/19/91)

In article <4702@lib.tmc.edu> jmaynard@thesis1.hsch.utexas.edu (Jay Maynard) writes:
>Actually, the NCR Tower series (at least my XP does this) does use SIGPWR.
>It has a battery backing up main memory, and if power is lost, as long as
>the battery can keep the memory alive, recovery from an outage is simple:
>the power recovery routine reloads any code running in intelligent
>peripherals (which aren't battery-backed), performs some miscellaneous

	Yes, I remember being a little bit shocked by my first experience with
	a power outage on the Tower/XP.  Power bump, cycle through the diagnostics,
	and up comes the system.  I was returned to my vi session, still
	in insert mode no less!  Didn't lose a character.  That was one of the
	nice things about using that machine... (although it wasn't overly
	speedy, things like this, and certain sysadmin tasks, were set up 
	reasonably well)

	Cheers!
	Rob
-- 
Rob Marchand                   UUCP  : uunet!attcan!lsuc!array!rob
Array Systems Computing        ARPA  : rob%array.UUCP@uunet.UU.NET
401 Magnetic Drive, Unit 24    Phone : +1(416)736-0900   Fax: (416)736-4715
Downsview, Ont CANADA M3J 3H9  Telex : 063666 (CNCP EOS TOR) .TO 21:ARY001

chased@rbbb.Eng.Sun.COM (David Chase) (02/19/91)

Is anyone besides me somewhat mystified/bemused that a discussion
about computers "for users not programmers" has mutated into a
discussion of BCD trivia and programming languages that don't take
advantage of the splartzflooie instruction?  Anyone care to speculate
on what a Nintendo Game-Boy might look like in five years, or the
effect that BCD might have on this?

David Chase
Sun

khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) (02/19/91)

In article <8141@exodus.Eng.Sun.COM> chased@rbbb.Eng.Sun.COM (David Chase) writes:

...   advantage of the splartzflooie instruction?  Anyone care to speculate
   on what a Nintendo Game-Boy might look like in five years, or the
   effect that BCD might have on this?

How can one be surprised by discussion (de-)evolution on the Net?

Since chips like Swordfish (the new Nsemi 64-bit chip with DSP
onboard; pictures of silicon have been shown, so it probably exists
with at least a few instructions working ;>) are touted to be such neat
graphics wins... when will we see the toy market make the transition?
(note that they haven't gone to 32-bit yet ;>).
--
----------------------------------------------------------------
Keith H. Bierman    kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

herrickd@iccgcc.decnet.ab.com (daniel lance herrick) (02/19/91)

In article <MCCALPIN.91Feb14105834@pereland.cms.udel.edu>, mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:
[Discussion of whether supercomputer should do mundane things like]
[compile and edit truncated                                       ]
> (3) At least on the Cyber 205, I was told (by Neil Lincoln) that the
> compiler actually made very heavy use of the vector unit.  I think I
> remember Neil telling me that he wrote a paper on the topic of vectorized
> code optimization in the late 70's or early 80's....
> 
Back around 1975 I drove down from Owosso to Michigan State to hear
a lecture by Neil Lincoln at an ACM student chapter meeting.

He was then working on the CDC Star, serial number one or two was
on the other side of his lab wall.  He had a native Fortran compiler
that he said compiled at [several million lines of source per second -
I don't remember the number].  That speed number impressed me.  It
also impressed the rest of the audience.

The Star was a 256-bit-word machine with an ALU at every word.

After a suitable pause for his speed number to sink in, he said, "What
if you could tell your computer to find every plus sign in memory and
replace it with something, and then, in the next machine cycle...."

Several years later, I was working at MDSI and met Neal Faiman who
had been at that same lecture as a student.  He told me that some
people at State had got a copy of Lincoln's compiler and ran it on
a Star simulator that ran on the CDC [3600 maybe] at the school.

The real punch line of this story came when Neal said that Lincoln's
compiler, running on the simulated Star, ran faster than the native
compiler shipped with the 3600.

[Don't bury me under questions, this story comes out of casual conversation
and I don't know things like comparative benchmarks of the generated code.
Lincoln's compiler did generate code for the 3600 so the thing could
have been put to use.]

dan herrick
herrickd@iccgcc.decnet.ab.com