Human-Nets-Request%rutgers@brl-bmd.UUCP (08/09/83)
HUMAN-NETS Digest        Monday, 8 Aug 1983      Volume 6 : Issue 44

Today's Topics:
                 Administrivia - Lost Submitter Name,
          Computers and People - Office Automation (2 msgs) &
                       The Worth of Technology &
                      Icons and User Friendliness
----------------------------------------------------------------------

Date: 7 Aug 83 17:49:56 EDT
From: Charles <MCGREW@RU-GREEN.ARPA>
Subject: Administrivia

In Vol. 6 #42, there appeared a news article on non-ionizing radiation
and its possible effects.  Due to an error, the 'From' portion of the
message was lost.  The article was submitted by Lauren Weinstein.

------------------------------

Date: 30 JUL 1983 14:30
From: "TS1::BURROWS Jim Burrows c/o" <DEC-HNT at DEC-MARLBORO>
Subject: Man or machine?

    Date: 27 July 1983 20:14 EDT
    From: Robert Elton Maas <REM @ MIT-MC>
    Subject: Secretaries and Managers -- whom to blame

    I personally take the opposite view.  I'm more concerned with the
    satisfaction of actually getting something accomplished, and the
    "power trip" in being able to do many things that many others
    can't do, rather than covering my a-- when those inevitable
    mistakes occur.  Thus I prefer the machine, which I can trust
    will do exactly what I commanded it to do once it has verified it
    got the command correctly, to the human, which will often give
    some ambiguous verification leaving me uncertain whether it
    (he/she) understood my command or not and whether it (he/she)
    will do it today or next month or ever.  If I insert a word in a
    file, it's really deleted, but if I ask a human to delete it,
    well maybe ... maybe not ...

Excuse me, but if you "insert a word" it most probably isn't "really
deleted", unless the machine isn't running up to snuff.  That kind of
obvious mistake is exactly why many people prefer to trust people, who
understand the intent and detect the error, rather than machines,
which are unlikely to realise that "insert" means "delete" in certain
circumstances or to recognise those circumstances.

Me, I trust human beings, and am deeply skeptical of machines.  Oh, I
love to work with them, and use them.  But given the choice, if I have
to count on something being done, I'll choose the human.  Of course,
machines work cheaper, and don't talk back.

------------------------------

Date: 6 Aug 83 21:41:36 EDT
From: Mike Zaleski <ZALESKI@RUTGERS.ARPA>
Subject: Fear and Loathing of Office Automation

Periodically, the issue of whether executives will or won't accept
office automation systems, and the reasons why they will or won't, has
been discussed on Human-Nets.  A recent article in MIS Week (8/3/83,
p. 28) discussed this issue, especially from the viewpoint of trying
to get employees to accept office automation.  I do not remember
seeing some of these points raised here before, and so I present an
excerpt from the article:

"The question is, then, why do so many white-collar workers furtively
- and openly - resist the new technologies that could make their jobs
infinitely easier?

"The answer, pure and simple, is fear: the clerical worker who fears
change, the unknown, and being de-skilled; the middle manager who
fears losing control of his job or department, since total automation
means everyone, including the president, will have immediate access
to information and will no longer have to call on the middle manager
to present it to him; the executive who trembles at the thought of
putting his fingers on a keyboard (Me, type?); and those who fear
losing their jobs altogether.

"These very real human fears can cause anxiety, depression, job
alienation, boredom, low morale, and outright sabotage.
One thing they won't cause is productivity."

The article also details the (probably fictitious) story of a
secretary who, having been office-automated in the "wrong" way (wrong
in the sense of not being done the way the article believes it should
be), grows increasingly unhappy and finally hands in her resignation
(after deleting the company's year-end report).  The secretary was
unhappy because working with a word processor had removed much of the
human contact from her job.  And although the article does not
mention it specifically, the reduction of human contact because of
the increasing use of office automation may be yet another reason why
people resist.

-- Mike^Z

------------------------------

Date: 03 Aug 1983
From: "JOHN CROLL at OBLIO c/o" <DEC-HNT@DEC-Marlboro>
Subject: Has technology changed humankind for the better?

Answering this question would make an interesting book.  I think a
better way to phrase the question is: How have advances in science in
general changed humankind?  How did the invention of the printing
press change things?  How did the rise of the mechanistic view in
physics change things?  How did the advance of the scientific method
change things?  How did the rise of the quantum theory change things?

Given the state of the world six or seven hundred years ago, before
science and technology became something done for its own sake, the
answer to the question is obvious (at least to me).  I would much
rather be sitting at my desk quietly avoiding the legwork necessary
to fix the race condition in the driver I just wrote than spend all
day working my ass off just to get barely enough food to keep me
alive.  Assuming I managed to survive to the ripe old age of 27, of
course.

Are things better than they used to be?  Hell, yes!

John

------------------------------

Date: Sat, 6 Aug 83 11:26:23 CDT
From: Bob.Warfield <warbob.rice@Rand-Relay>
Subject: Re: HUMAN-NETS Digest V6 #42

    'I hope we will finally get past this "user-friendly man-machine
    interface" fad so that we can concentrate on what happens to the
    information once it is in the computer.'  -- Ken Laws

This is an interesting philosophy that is prevalent but unproductive.
Ken advocates that it would be better to program an administrative
assistant than a filing cabinet, a position I sympathize with.  What
he overlooks, however, is that programming an administrative
assistant is an AI problem and is much more difficult to accomplish
than improving the man-machine interface on the system.

The point I'm making here is that too often projects get started
before the proper tools are available.  If AI is ever going to be
tackled successfully, we need the best non-AI tools possible.  This
means improving the quality of the man-machine interface in order to
increase the bandwidth of information transfer between machines and
men.  Judging from the number of window managers and such available
for LISP machines (not to mention that Xerox PARC is no lightweight
in AI), I would say that the majority of AI researchers have realized
their tools are probably not yet adequate and are desirous of
improvements.

The ultimate parody of the situation is just to say, "I don't
understand why anyone bothers with programming language research;
let's get on and write some AI stuff in assembly language."

Bob Warfield

------------------------------

End of HUMAN-NETS Digest
************************