[comp.society] Examples from Scandinavia

sebarber@ATHENA.MIT.EDU (Steve Barber) (05/05/87)

Tony Marriot asks for some examples from Scandinavia describing the use of
"co-determination" in technology design.  Two of the three examples from the
video tape I saw were (I forget the third):

- An aircraft maintenance shop whose computer system was redesigned to
  allow the machinists to plan their own work.  The results were an increase
  both in productivity and job quality (pretty important for airplane repairs,
  no?), which was attributed to the greater control allowed the workers in
  scheduling and to some notion of increased morale and sense of worth or
  importance.  This last point is not to be overlooked, as anyone who has
  ever managed anyone (especially programmers!) can attest.

- A bank which, while in the process of largely automating away its tellers,
  also developed an integrated software/computer system (with
  NCR/Scandinavia) to turn the remaining tellers into "customer service
  reps" with more autonomy and authority, able to deal with a broader
  range of services and responsibilities.

Now, given that this is a rather limited range of examples and that I
know virtually nothing of Scandinavian society, I was taken by this
model of "cooperation" in technological development. Co-operation is
almost euphemistic in this context: what's really going on is that in
the continuing struggle of labor vs. management, labor seems to be a
far more powerful social group there than it is here.  The society
recognizes labor's claims as legitimate, and provides public monies
and institutions (such as laws and a labor-oriented R&D center).
Within companies, union input is necessary for the introduction of any
new "technology" (I wish I knew how they define that). In America,
labor has ended up with less power, and has accepted management's
determination of the use and form of technologies (computers included)
in exchange for job security.  It may also be the case
that the constant influx of relatively cheap immigrant labor dilutes
the power and solidarity of the workforce in the U.S.

When talking about the appropriateness of automation, there are several
perspectives to consider: that of management, responsible to their
shareholders and/or themselves; labor, responsible to themselves and their
families; and that of the society at large which, depending on your
viewpoint, may be responsible for the well-being of all, or at least
concerned with maximizing productivity. Most automation causes displacement
in the workforce, at least temporarily.  The concern is to balance
management's desire for more productivity with the worker's "desire" for
steady employment.  Maybe we'd all be better off if menial jobs were
automated away so that we could leverage our productivity, and maybe not.

If jobs lost in one sector are not replaced by jobs in another sector, then
people are disemployed, wealth becomes more concentrated, and the society
will destabilize.  If there is a lag in job creation, the same thing can
happen.

My interest in these issues stems from my interest in user-interface
and other software system design.  Most software automates existing
tasks, and it is, in my mind, a duty of the designer to understand
what he is automating and why, so that the "how" is carried out
correctly. To me, a system designed to increase productivity by
de-skilling the worker is bad for the worker, who is de-motivated, and
for management, which now has a disgruntled workforce and ultimately
lower-quality output.  The basic assumption here (borne out by
history: examples on request) is that technologies don't just spring
into existence, their forms determined by "science", but that they are
expressions of the goals and relative power of those who designed and
implemented them.

I submitted the previous article to let you folks know what people
with similar concerns were talking about, and to provide an
alternative to the dreary "literacy" discussion that has
been going on for over a month.  The major question: Who does the
computerization of society benefit, and why?

-Steve Barber
sebarber@athena.mit.edu or ..!seismo!cbmvax!hutch!barber

(Some further reading along these lines:
 D.F. Noble, "Forces of Production" and "America by Design"; Lewis Mumford,
"Technics and Civilization"; Sherry Turkle, "The Second Self"; Joseph
Weizenbaum, "Computer Power and Human Reason"; Robert Howard, "Brave New
Workplace")

MJackson.Wbst@Xerox.COM (Mark Jackson) (05/13/87)

Steve, while it's true that labor unions in this country have been more
focused on security and pay than on larger workplace issues, this has
not been entirely by choice.  The UAW made a bid for something like
"participative management" I think at Ford in the late 40's or early
50's.  The idea was killed essentially by middle managers, who were
jealous of their own "rights" in the workplace.

One might speculate that the ease with which worker participation can
function in a society is inversely proportional to the degree of
rigidity and overtness of that society's class structure.  Thus England
has *terrible* labor relations, whereas in Scandinavia humanization of
the assembly line has a substantial history.

So that this note not be *entirely* unrelated to "Computers and Society"
(although Dave has been inviting us to comment on Gary Hart, so
obviously his standards are rather loose :-), let me note your closing
comment:  "The major question: Who does the computerization of society
benefit, and why?"  Neil Postman, in /Amusing Ourselves to Death/, has
this to say:

    Although I believe the computer to be a vastly overrated technology, I
    mention it here because, clearly, Americans have accorded it their
    customary inattention; which means they will use it as they are told,
    without a whimper.  Thus, a central thesis of computer technology--that
    the principal difficulty we have in solving problems stems from
    insufficient data--will go unexamined.  Until, years from now, when it
    will be noticed that the massive collection and speed-of-light
    retrieval of data have been of great value to large-scale organizations
    but have solved very little of importance to most people and have
    created at least as many problems for them as they may have solved.

Now clearly there is a lot to argue with here (I believe, for example,
that the general inability of humans to form effective, humane
organizations on a large scale is an enormous problem for *everyone*),
but I assume that each of us who reads this digest sees *some* such
outcome as a possibility, and is interested in avoiding that eventuality
if possible.

Mark

sebarber@ATHENA.MIT.EDU (Steve Barber) (05/13/87)

	Thank you for your response, and especially for the Postman
quote.

	If you read D.F. Noble's  Forces of Production or Bob Howard's
Brave New Workplace, they make the point that the U.S. labor unions in 
effect traded off their chance at participatory management in the middle
decades of this century in exchange for job security for current union
members, mostly out of the fear that their jobs would be otherwise
eliminated through automation. Upon reflection, this action seems like
a massive sell-out on the unions' part, although I must admit I don't
know what I would do if I were in the unions' position.

	In part, this is what is happening in Scandinavia: the unions there,
who are more secure than the unions in the U.S. and who therefore did
not feel compelled to negotiate away their say in corporate management,
are encouraged, and by law must be allowed, to participate in the
application of new technology.  In the "Computers in Context" video,
one of the examples had the workers participating in the design of a
system which they would use in redefined job roles since their old jobs
(bank tellers) were being automated away.

	The guiding principle in both these cases seems to be that the
unions are trying to cut their losses due to automation.

	It would be difficult to argue that automation in many cases is
not a good thing.  However, even if we accept the premise that the
increase in GNP due to automation genuinely benefits society (and
benefits it fairly)* in the long run, there will still be short-term
disruption when a new technology is introduced (short term in this
context meaning something like 5 to 20 YEARS for cases with a large
enough effect to worry about from a public policy standpoint).

-Steve Barber

* I find this premise to be problematic at best, but let's let it stand
  so people don't start calling me a Luddite and then ignore the labor issue.