[news.groups] comp.cog-eng

avr@hou2d.UUCP (Adam V. Reed) (07/24/87)

In article <386@sdics.ucsd.EDU>, norman@sdics.UUCP writes:
> Sounds like a typical problem in human factors/ergonomics/
> human-computer interaction: selecting a name.
> Now, it is well known that one should not select design parameters
> simply by thinking about it.  One must either use previously accepted
> standards or do some experiments.

The standards may be de facto ones: most people in the field refer to
themselves as "user interface developers"; almost none call themselves
"cognitive engineers". Another valid technique for correcting a
defective design is to diagnose the bugs, and fix them. Cognitive
engineering is strongly related to both ergonomics in general, and to
psychology. People working in those fields post in comp.cog-eng
because it is the "closest thing" to what they need but don't have,
namely separate groups for those disciplines. I think that
comp.cog-eng needs to be split into three related but separate groups:

	comp.user-interface
	sci.ergonomics
	sci.psychology

I have added news.groups to the heading.
					Adam Reed (hou2d!avr)

craig@unicus.UUCP (Craig D. Hubley) (07/26/87)

Sort of long... bear with me.

The problem, I think, with the name comp.cog-eng is that it could just
as easily be understood as "the engineering OF cognition", although it
was intended to mean "engineering FOR cognition".  Thus Mr. Harnad's
cross-posted "sludge".  The former reading could be yet another
synonym for AI.

I don't think it is a good idea to splinter the human factors discussion
across several groups.  It is, after all, a holistic subject and that
should be encouraged.  This, to me, means keeping varied aspects of the
discussion under a single net-name.  The creation of a group such as "sci.psychology"
might prove useful, but in the absence of traffic that usefulness shouldn't be assumed.
It would also generate traffic on Freud and Jung -- fine, but I've
seen no one on cog-eng discuss matters that ought to be moved to such a group.
If someone wants to discuss psychology relevant to human factors issues, of
which there is a great deal, by all means do it in the human factors group.
If there is a token psychologist posting to "comp.user-interface" and a token
designer posting to "sci.psychology", that only makes the situation worse.
Splintering and overspecialization are among the things that have made human
factors engineering such a poor cousin to other forms of engineering, when
it should have been at the top of the heap.

My solution?  (Of course, those who post criticisms must post alternatives!
 :-)).  We have the "comp." prefix; why not simply add "for_humans", or some
such, so that the purpose is absolutely clear?  It may not read like other
net names, but then this shouldn't be like other net groups.  It isn't just
USING the medium of the computer, it's ABOUT the medium and its effects.  Sort
of a meta-group.

I don't think it's possible to misread:  comp.for_humans
					comp.for.humans ?
Wouldn't this solve the problem?
And isn't this what it's about?

I've had my say.  My next post (to whatever is decided) will be about a real
human factors issue.  Maybe those participating in this current debate ought
to do the same.  Nothing gets a group back on track like content.

	Craig Hubley, Unicus Corporation, Toronto, Ont.
	craig@Unicus.COM				(Internet)
	{seismo!mnetor, utzoo!utcsri}!unicus!craig	(dumb uucp)
	mnetor!unicus!craig@seismo.css.gov		(dumb arpa)

hanley@cmcl2.NYU.EDU (John Hanley) (07/30/87)

In corresponding with Craig Hubley (craig@Unicus.COM) we have agreed that
self-referential newsgroups that exist only for the purpose of debating
what they should be named are just plain silly.  AND, further, Craig
offered to get the ball rolling and asked me what I would like to see discussed
here.  SOOoo, being more of a parallel type than a HumIntFacePerson, I have
a parallel processing challenge for ye of net.land:

    Design the ultimate parallel debugging environment for a massively
parallel machine.  Assume a Unix-based environment and conventional
procedural languages (C, fr'instance), and computing resources available
to the debugger comparable to those available to the application being
debugged (i.e., the debugger shouldn't try to allocate more than two or
three more processing elements beyond those used by the application -- it
is unacceptable to use 2N PE's to debug something that runs on N PE's).
Assume any graphics/mouse/pointing device/disk space/OS support/network
resources you find convenient.

I claim this is a good topic to pull comp.cog-whatever out of its slump
because it deals with the crux of designing a good human interface --
organizing huge gobs of data and presenting it to a person on demand in
a form that will aid him in solving a difficult problem (getting a
parallel program to run).

Object-oriented programming is not to be considered (otherwise the topic
would be too broad).  Smalltalk already seems to have a pretty good
debugger, though I am hardly intimately familiar with it.  Feel free
to borrow ideas from it if you think they're good ones that fit well.
Theoretically, anything that's a good debugger for procedure-oriented
Lisp or C should be an acceptable debugger for object-oriented Lisp or
C, so upward expansion shouldn't be too awful.  I'd settle for a good
base, initially.

Let me fill you in with a little background information.  Here at the
Ultralabs we have five 8-processor machines (the PE's are 68010's) that
are prototypes for an N-processor Ultra, where N is around 4096 or so.
Within several months a 64-processor machine using 68030's and much
faster FP coprocessors should be built.  All processors share a large,
flat global address space (~1Meg/PE) and have individual private caches
(~32K/PE), so that global memory is used primarily for communicating
among PE's and for reading program instructions.  The cache is big enough
that loops frequently run entirely within it.  Coordination is
done through the atomic operation Fetch&Add.  Three control lines connect
the processors to the memory elements:  READ, WRITE, and F&A.
The really neat thing about the Ultra is that it scales up very nicely because
it has _no_ serial bottlenecks -- no critical sections in the OS (symunix,
for symmetric Unix, as opposed to master/slave) and no waiting on global
memory requests (all processors can read or write or F&A any location, even
multiple PE's to the same location, in _one_ memory cycle).
The debugger, on the other hand, is not nearly so sophisticated.  It is
called pdb (parallel debugger) and is based on sdb.  With all due respect
to its author, pdb has an ugly user interface, largely because I consider
sdb's interface to be ugly.  In contrast, the VAX/VMS debugger in full-screen
mode I would consider to have a reasonably good user interface (it would be
better if setting watch points on variables worked better so you could get
the values of changed variables printed out intermixed with trace output).
Imagine sdb with some extra facilities to handle parallelism (command
enhancements and some extra commands) thrown in, and you'll have a good
picture of pdb.  Basically, any information you want you have to ask for,
and the terminal is modelled as a glass tty (no windowing).  The _good_
thing about pdb is that processes are arranged in groups and are stopped
_as_ _a_ _group_ when a breakpoint or keyboard interrupt is encountered,
so while you're examining variables and so on all the processes cooperating
with the one you breakpointed are also suspended, giving you a static rather
than constantly changing environment in which to poke around to see what's
going on.

My idea of the ultimate parallel debugger would be closer to dbx or the
VMS debugger and would give each process its own window.  The idea of
process groups would be retained, possibly qualified by a process' state,
as in "I want these processes to stop when this one hits a breakpoint,
but only if they have acquired A-locks.  If they have acquired B-locks,
they should continue on, because I'm going to be modifying the variable
they keep looking at in a loop to see what happens."  It would also
probably need to be able to halt all PE's, examine their state, then
let all PE's [simultaneously] execute one instruction, examine state
and print out any variables that changed, and continue to loop.  Higher
levels of granularity would require the programmer to pick a spot (or
spots) in the body of a loop where the breakpoint should happen so
selected variables or all changed variables can be printed out.  As the
code was executing I would be able to see where each process was because
each would display a window full of code with the instruction at the
current PC high-lighted at the middle, and a log of selected variable
changes scrolling in a few lines at the bottom together with windows on
selected variables.  Unfortunately, the relative timing of concurrent
processes is crucially important in uncovering bugs, and it would
almost certainly be messed up beyond all recognition by the time
taken to print all this stuff out.  Hmmm.  Perhaps an initial run could
be done with a fixed overhead routine recording the time and current
PC after each instruction, keeping a voluminous log, and then for the
display run each process would be kept in synch with that log so as to
maintain the original pattern of where PC's are relative to everyone
else.  Sounds like this ultimate debugger is going to run slower than
snail speed or will require a hardware debugger...  But that's OK, as
long as the debugger can be designed into it.  As a matter of fact,
we're designing a board right now that snoops on the Ultrabus and
records the last million or so addresses seen on it.

Anyway, what do all you human-interface people / cognitive-engineers
think about this?

                   --John Hanley,
 /  /   ____ __  __  System Programmer, Manhattan College [ ..cmcl2!mc3b2!jh ]
/__/ /__ /  /-< /-/  Researcher, NYU Ultracomputer Labs   [  Hanley@NYU.arpa ]

"The Ultracomputer: to boldly go in log N time where no N processors have
 gone before."


(All typographic and logical errors are intentional and are exacerbated by
 the fact that cmcl2 is going down in 2 minutes!)

hanley@cmcl2.NYU.EDU (John Hanley) (07/31/87)

Short & sweet:  I was under duress and being bombarded by "system going down
                in X minutes" messages, I choked.  Sorry.

                Comp.cog-eng people, please edit out "news.groups" from the
                Newsgroups: field before following up to my pdb post.    --JH