
Unravelling the Nets: Some observations prompted by Rochlin's study `Trapped In The Net'

Peter B. Ladkin

Article RVS-J-97-05

Contents

Introduction
A Reformulation of the Basic Argument
The Contra-argument
Emergent Behavior
`Distancing' Workers from Tasks
Stock Trading and Financial Markets
Safety-critical Systems in Aviation
Military Accidents
Techno-War and C3I
Are Nets at Fault? - Conclusions
References
Footnotes

Introduction

Gene Rochlin's thesis in his book `Trapped in the Net: The Unanticipated Consequences of Computerization' (Roc97) is that the development of computing from stand-alone machines to networks has taken us from the use of a technology that answered our questions as we posed them to one that dominates our lives through its complexity, unpredictability and unreliability. Leslie Lamport once characterised distributed computing as a situation in which one's work can be interrupted by a machine one didn't even know existed. Rochlin argues that we are giving up more control than we had thought.

Consider the automobile. Cars use roadways to take us from one living space to another. But roadways are ubiquitous, and are coming to dominate the landscape, both of the living space we come from and of that which we wish to visit. Someone who lives in the San Francisco Bay Area might legitimately wonder whether the means has overtaken the end. Average freeway speed in parts of Los Angeles is less than twenty miles per hour, and no betterment is in sight. Daily travel times to and from work not unusually exceed three hours. Was this foreseeable forty years ago, when the car was promoted over public transport as freeing us all from constraint? Computer networking in general, and the Internet in particular, has been promoted as freeing us from constraint in much the same way. Is this in any way accurate? Can we predict now the long-term organisational consequences of network use? Rochlin observes that in freeing ourselves from one set of technological constraints we embrace others, and he attempts to identify and extrapolate those associated with networking. He presents his arguments for a general audience in a pellucid and thoughtful book, which is at the same time an elegant example of the publisher's art, reminding us that message can be enhanced by medium. There are occasional flashes of limpid prose, such as the elegant short description of packet-switching and TCP/IP for normal people (Chapter 3, p40).

Rochlin is an expert in the behavior of large organisations. He has conducted significant technical studies on so-called high-reliability organisations, including U.S. Navy carrier flight operations (RoPo87); the Iran Air 655 shootdown by the Vincennes (Roc91); and consequences of computerisation in the Gulf War (RoDe91.1), (RoDe91.2), which he calls upon in this book. For him the paramount question is how sections of society organise themselves around computer network use and what the consequences have been and will be. He utilises an array of recent sociological work besides his own, including that of Perrow (Per84), familiar to many computer scientists, as well as recent studies of computers in the workplace.

According to the cover blurb (which is, in keeping with publishing tradition, quite overblown), Rochlin

...takes a closer look at how [the] familiar and pervasive productions of computerization have become embedded in all our lives, forcing us to narrow the scope of our choices, our modes of control, and our experiences with the real world. The threat is ... the gradual loss of control over hardware, software and function through networks of interconnection and dependence. ... The varied costs include a dependency on the manufacturers of hardware and software -- and a seemingly pathological scramble to keep up with an incredible rate of sometimes unnecessary technological change. Finally, a lack of redundancy and an incredible speed of response make human intervention or control difficult at best when (and not if) something goes wrong. ...

He observes particular aspects of computer use: in office work, in computerised financial and commodity trading and the stock market, in glass cockpits and air traffic control, in military systems and some of their famous accidents, in the logistics of `techno-war', and in C3I systems. He contrasts social features of the new structures with features of those they are replacing, and thereby attempts to sustain an argument about the consequences of networked computerisation in the large.

Because of my academic specialities, I found myself particularly interested in what Rochlin had to say in Chapter 2 (`Autogenous Technology', on the general social dynamics and disadvantages of computerisation), Chapter 3 (`Networks of Connectivity', on the technology and growth of networking and the Internet in particular), Chapter 7 (`Expert Operators and Critical Tasks', concerning especially `glass-cockpit' aircraft and `new-generation' air traffic control, as well as classical industrial human-machine issues), and Chapter 9 (`Unfriendly Fire', concerning three air-defence incidents, two of which involve the Stark and the Vincennes during the Iran-Iraq war).

A Reformulation of the Basic Argument

How does simple computer use turn into the kind of complex organisation worth writing a book about? The primary idea behind the first use of computers was to perform repetitive calculations that would have taken humans an impossible amount of time. This function developed into other ways of aiding daily tasks, at first at work and later at home: first database management for the salesman, then checkbook balancing for the housekeeper. But there remains for many of us a serious question: why would one want to balance a checkbook by computer when it is so easy to do by hand? And if one finds balancing a checkbook difficult, is performing a complex series of ordered keystrokes really going to help much? And how does one tell that one has the right answer? The answer appears to be yes, performing keystrokes does seem to help, although one suspects the reasons might be psychological: just supposing something will help may indeed turn it into a facilitator. But when the object is a computer program, one becomes dependent on hardware and software experts and their support to accomplish something that could just as well be done without. That's the first argument.

So computerisation leads to dependence upon computer experts, and when those experts are also in business to make their living from this dependence, the goals of both parties come into conflict in well-known ways. Add to this a step into the highly complex, ill-understood and somewhat technically fragile world of computer networking. Highly complex systems often exhibit `emergent behavior' - behavior of the whole which cannot be described easily in terms of the behavior of the individual parts. The social problem transforms itself from one of resolving conflicting goals to one of trying to understand the mechanisms of cause and effect, of finding out which machine you've never heard of is spoiling your day; and further, of determining which emergent behavior is disrupting your work. That is the second argument.
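
Lamport's machine-you-never-heard-of can be made concrete with a toy sketch (my own illustration, not Rochlin's; the machine names and the Python rendering are invented for the purpose). A user's task sits at the end of a chain of dependencies, and a failure anywhere upstream surfaces only as `my work has stopped', with the responsible machine nowhere visible at the point of failure:

  # Toy dependency chain (invented machine names): the user's task fails
  # because some machine the user has never heard of is down, and the
  # failure is visible only as 'my document will not print'.

  dependencies = {
      "print document": "print server",
      "print server": "file server",
      "file server": "name server",
      "name server": "router in another building",  # never heard of it
  }
  failed = {"router in another building"}

  def works(task: str) -> bool:
      # A task works only if it, and everything upstream of it, works.
      if task in failed:
          return False
      upstream = dependencies.get(task)
      return True if upstream is None else works(upstream)

  print("print document:", works("print document"))  # False - but why?

In a real network the chain is neither linear nor known in advance, which is precisely the point: the behavior of the whole is not readable off any single part.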

It remains to establish the arguments, accomplished appropriately by appeal to the examples. We consider the examples, and whether they support the arguments, below.

The Contra-argument

Should we be so worried? Many of Rochlin's arguments are to the effect that little has improved - that our goals are not necessarily furthered, and that the problems we engender can in fact conflict with those goals. But there are also situations which have been improved immeasurably by computerisation, and in particular by computer networking. My work as an academic author and disseminator of my scientific observations has been unequivocally enhanced, first by the use of editing and word-processing tools and more recently by a switch to World Wide Web-based writing and dissemination. I can barely conceive of how it must have been to write technical papers in the days before good editors and document preparation tools, although I grant that many managed (1). Before the WWW, I had to send copies of documents to interested parties on a mailing list. Now, work I put on the WWW is read by people whose names, professions and interests I do not know, save that they are interested in what I might have to say on a particular topic. When internet communication breaks, as it did during the Morris worm incident and as it did again the other day when corrupted name-service databases were disseminated, I can wait a few hours, or a day or two, and it's back. And in the meantime, I can work standalone on document preparation. In the fifteen years I have been using it, I can't say that internet communication has ever seriously broken in a way that profoundly affected my working life.

Furthermore, my teaching materials are on the WWW and available promptly and cost-free to every student, whether in Bielefeld or Bratislava, who wishes to use them. Student seminar work - slide presentations and papers on interesting topics - is available from the same source on a relatively permanent basis. A number of my students have thus been inspired to write what amounts to chapters of a detailed text on the topics they chose to report on.

So despite our dependence on equipment vendors and their rivalries, my computer-based document preparation and dissemination works, as does my internet communication. Both have relatively straightforward and transparent structures, in contrast to those of other technologies such as automobiles. I would compare internetworking technology to a pre-70's VW Bug: it basically runs, and when it goes wrong, with a certain amount of basic knowledge you can usually fix it yourself. Of course, most people choose not to, and there are many vendors of more sophisticated ways of performing these basic operations. But if you buy a 16-valve electronically-controlled BMW, you are usually not planning on home maintenance, and neither are you planning on a cross-Sahara drive.

I suspect that generalising about networking suffers similar drawbacks to generalising about cars. There are different types of net, with different purposes and different reliabilities, just as there are different types of cars; and if you don't pick the right one for the job, you're sunk. Your Ferrari won't like idling in traffic jams while commuting every morning, you can't park your Chevy Suburban on the sidewalks in Paris, you can't take the family on a camping trip in your Cinquecento (OK, I admit that's culturally relative), and you can't expect to cross the polar ice caps in your VW Bug. The history of computing and networking is full of those who tried similar things, and miserably failed. Why should we be at all surprised, let alone worried?

But, the argument might go, I'm a Professor of networking, and I'm supposed to understand such stuff, just as we would expect the Professor of Auto Mechanics to know how to fix his VW Bug. The important point is not that the Professor knows, but that those whom he teaches both learn it and do it. Are there more competent internetwork system administrators in Bielefeld than competent auto mechanics? Probably not yet, but judging from my experience with auto mechanics in Silicon Valley, there sure seem to be more competent sysadmins thereabouts. In any case, the numbers seem to be of comparable orders of magnitude.

In the course of this observation of how the WWW has enhanced my working environment, and how it relates to Rochlin's arguments, it is worth noting both that the rapid rise of the WWW itself has been regarded as an `unanticipated consequence of computerisation', and that the perception of the irreversible change in publishing behavior is shared by many who also welcome it, for example (Odl95).

So does the use of computers enhance my work? Most certainly, yes. Am I somehow more at risk? I don't really see how. With a regression to 70's technology (modems and telephone lines, UUCP) and armed with my list of correspondents and correspondence from the previous 15 years, I could in principle relatively easily overcome any disruption to the Internet short of a complete breakdown of telephone service. Could somebody destroy my computing life while I sleep? Like all net administrators, we maintain physically separate data backups, and my portable, which mirrors my most important current work, is physically disconnected from the net when I'm not using it. The residual risk is one of physical security and safety, of a sort not especially concerned with computing or internetworking.

Am I in hock to the vendors? Not really. I can buy a new computer from any number of hardware vendors for the price of a flashy bicycle, and the operating system, Linux for example, I can get for free. Just the way most of my students do it now. So what's the problem?

Emergent Behavior

The problem which Rochlin does most to address is that of the emergent behavior of networked computing. Let us compare it with the situation with cars. If I had bought my VW Bug in order to cross the Sahara, then I would have been daft. But who would have imagined that its main running state, in SF Bay Area commute traffic, would be a cyclic 25-minute sequence of 20 minutes at 1,000rpm in first gear at 5mph with 15 clutch disengagements per minute, followed by 5 minutes at 3,500rpm in top gear? And who could have imagined buying a Porsche for these conditions (nevertheless, people did and do)? And who could have imagined the worsening of air quality, the continual background noise, and the rendering of large amounts of public space unsafe for unattended small children, not to speak of the few but nevertheless all-too-common maiming and deadly accidents?

Some salient observations about emergent behavior in networking follow.

`Distancing' Workers from Tasks

After introductory chapters surveying current perspectives on human use of technology and the history of networking, Rochlin compares the use of computers in the workplace with Taylor's time-and-motion studies and his thoughts on `scientific management'. Issues such as `deskilling' are handled, as well as assignment of responsibility in heterogeneous systems. Deskilling, or the removal of workers from `direct' contact with the items of the task they are charged with accomplishing, is a hot issue not just in business management but in other domains such as process control and aircraft piloting, in which it is often asserted that the pilot who exercises `supervisory control' over the automation is less in touch with the task and less ready and able to take over when things start going wrong. A conception of the extra `levels' inserted into `contact' between pilot and machine is given for example by (Bil97, Fig. 3.2, p36).

The question I often ask myself about this concept is how exactly the work is defined, and how one may measure `distance from the task'. In safety-critical enterprises such as aviation, there are paradigms to work from. For example, moving a control wheel could be regarded as `distant' from the task of moving the ailerons up and down, but it is traditionally not so regarded, both because of its long history of success in aviation and because of its standardisation for almost a century. One could imagine that in 2010, programming a flight management computer could be regarded as a `core competency' of a pilot - after all, how else is the airplane to fly this complicated approach, with all its heading changes, glide-path-angle changes and altitude restrictions, while processing traffic alert warnings and sequencing instructions from air traffic control? When all goes well, the automation in use now is able to fly more stable, simpler approaches than can be flown by hand. And in some cases there are approaches in really bad weather (where there is no or very limited visibility on the ground) which are not permitted to be flown `by hand'. Programming and monitoring the autopilot is simply part of the task of piloting. You can't just `switch it off' any more.

One could legitimately describe the pilot's task in programming and monitoring the autopilot as one of `supervisory control'. That is, the same control behavior is required as before, but this control behavior is now largely implemented in the autopilot, and the job of the pilot becomes one of ensuring that the autopilot does what the pilot would do (and hopefully more reliably, or better, or both). There are well-known problems with performing supervisory control (Bil97). One suggestion could be: regard the step to requiring supervisory control as providing one measure of `distance'.
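
To make the notion slightly more concrete, here is a minimal sketch of supervisory control - my own illustration rather than anything from Rochlin or Billings, with invented function names and numbers. The control behavior proper (here a crude proportional pitch law) is implemented by the automation; the human's residual task is to check that what the automation commands stays within expectation, and to intervene if it does not:

  # Minimal, hypothetical sketch of 'supervisory control': the control
  # behaviour itself is implemented by the automation, and the human's
  # task reduces to monitoring its output against an expectation.

  def autopilot_pitch_command(altitude_error_ft: float) -> float:
      # Automation: a crude proportional law standing in for the real
      # control behaviour (gain and limits are invented numbers).
      gain_deg_per_ft = 0.01
      command = gain_deg_per_ft * altitude_error_ft
      return max(-10.0, min(10.0, command))  # limit to +/- 10 degrees

  def supervising_pilot(altitude_error_ft: float, commanded_pitch: float) -> str:
      # Human: no longer moves the controls, but checks that what the
      # automation commands is roughly what the pilot would have done.
      expected = 0.01 * altitude_error_ft
      if abs(commanded_pitch - expected) > 2.0:  # tolerance in degrees
          return "intervene: disconnect the autopilot and fly manually"
      return "monitor: automation behaving as expected"

  error = 300.0  # aircraft 300 ft below its target altitude
  pitch = autopilot_pitch_command(error)
  print(pitch, "->", supervising_pilot(error, pitch))

The cognitive difficulty is visible even in this toy: the supervisor must maintain an independent expectation of what the automation should be doing in order to recognise when it is not.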

Let us now see if this suggestion is reasonable. Consider another example of aviation automation. New aircraft such as the Airbus A320/330/340 series have `control laws' that differ substantially from `traditional' control laws. In certain control `modes', a full backward deflection of the `control stick' gives maximum climb capability, rather than maximum deflection of the elevators as in `traditional' aircraft. Thus the A320 cannot stall on maximum backward stick deflection in these control modes. Similarly, a given thrust-lever position in certain modes no longer corresponds to a given thrust level, as on `traditional' aircraft, but to a given airspeed. The engine thrust will adjust to maintain this goal. This gives a different connection from pilot to aircraft control, but it is not a step to supervisory control. It is a switch to different laws of control. There is no obvious argument in this case that it is `further away' from the task. It might even be argued to be nearer. The argument would be from cognitive capabilities - the point of setting thrust on descent is usually to achieve a target airspeed, and when pitch changes, the thrust on a traditional aircraft must be readjusted to recapture the target airspeed, unlike that on an A320. On the A320, it could be argued, the pilot's task is more directly connected with the goal.
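
The contrast between the two kinds of automation can be put in a few lines of illustrative Python (my own sketch with made-up gains and numbers, not Airbus's actual control laws). Under the `traditional' mapping the lever position names a thrust level, which the pilot must keep readjusting as speed and pitch change; under the speed-based law the same lever input names a goal, a target airspeed, and the automation adjusts thrust to hold it:

  # Hedged illustration of two 'laws of control' (invented numbers, not
  # real Airbus logic): a direct thrust mapping versus a speed-holding law.

  def traditional_thrust(lever_position: float) -> float:
      # Lever position (0..1) maps directly to engine thrust (0..100%);
      # holding a target airspeed is left entirely to the pilot.
      return 100.0 * lever_position

  def speed_law_thrust(target_speed_kt: float, current_speed_kt: float,
                       current_thrust_pct: float) -> float:
      # The lever names a goal (a target airspeed); the automation nudges
      # thrust up or down to close the speed error.
      gain = 0.5  # percent thrust per knot of speed error (invented)
      new_thrust = current_thrust_pct + gain * (target_speed_kt - current_speed_kt)
      return max(0.0, min(100.0, new_thrust))

  print(traditional_thrust(0.6))               # fixed thrust: 60 percent
  print(speed_law_thrust(250.0, 240.0, 60.0))  # thrust raised to regain 250 kt

Nothing here is supervisory control: both functions are direct control laws, and the second arguably states the pilot's input in terms closer to the goal.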

The lesson here is probably to not conflate the two situations. Supervisory control is one aspect, with its own advantages and cognitive problems, and altered control is another, with its own set of different advantages and problems. Not all aircraft automation is a transition to supervisory control, and not all transitions to automation involve cognitive `distancing' from the task. It seems that a fairly careful taxonomy of automation is needed in order to relate the type of automation to its consequences, and it doesn't appear that anyone's developed one yet.

In his short section on Pilot Error (Roc97, pp112-115), Rochlin doesn't distinguish the benefits of hands-on skills (UA 811 out of Hawaii, whose pilot landed the B747 safely after a significant part of the fuselage surrounding a cargo door blew off; UA 232 in Iowa, which lost essentially all flight controls after an uncontained engine failure severed hydraulic lines, but whose pilots performed a highly unusual and skilful landing; an Air Canada B767 which ran out of fuel and whose pilot performed a superb dead-stick landing) from difficulties caused by mode confusion and lack of attentive supervision (presumed in the A320 Strasbourg accident, and also in the A320 Bangalore accident), or from confirmation bias allied with visual-perception issues (Rea90, Section 6.4, p89) (the Kegworth accident to G-OBME, exacerbated because the pilots received a false-positive result to their initial actions). It's not clear that mode confusion and confirmation bias are correlated at all with lack of hands-on skills, as Rochlin's section on pilot error would lead us to believe.

Let us move on to consider `distance' in more traditional industrial jobs, which do not necessarily have a safety-critical component as does flight control. A worker performs a task that is defined in a social, cultural and technological environment. When the technology changes, so that for example different automation is introduced, the behavior required of the worker to accomplish the goal of the task may change significantly. Instead of using a power driver to attach components together, the worker may instead manipulate switches in order to influence a robot that does these things. There was always a distinction between goal (`fasten these two components together') and task (`use a power driver in such-and-such a way'). What measure of distance could be used to suggest that manipulating a robot is `further' from the goal? One could try to assess the supervisory control, but this may not yield as stark a contrast as in flight control. One manipulates a switch, and either the robot performs the task or something goes obviously wrong and one shuts the thing off. Similarly, when attaching components with power tools, one applies them correctly, turns them on and completes the task, or something starts going wrong and one switches them off. I imagine it will be hard to justify philosophically a significant qualitative difference between the two situations.

I suspect there are residual arguments from Marx and Weber concerning `alienation' of `labor' lurking behind concerns about distancing. While such a concern may be valid, we require an objective, non-relative definition of `distance' between task and goal. To my knowledge, this has not been given, either in this work or in others. My examples have argued that `distancing' may play a smaller role than hitherto imagined in many forms of industrial automation.

An orthogonal issue concerns the possibilities that computerisation allows company management for microcontrol of workers' behavior and accomplishment. Stories abound of companies monitoring employees' email and other electronic products of work time, in order to see that the products contribute towards company goals. Indeed, this practice seems to be spreading widely. One can legitimately argue that discretionary control over subtasks is positively correlated with worker `satisfaction', and that worker `satisfaction' is positively correlated with more efficient goal accomplishment in the workplace; and conversely, that microcontrol leads to resentment, and resentment correlates negatively with efficient goal accomplishment. Such arguments have some force, and it is legitimate to wonder about the deeper effects of such practices upon the sociology of work.

Rochlin argues that this intrusive and distributed control is unwelcome and socially unhealthy, and treats it as a disadvantage of computerisation. I accept the argument that networked computerisation of work tasks requires us to look at the changing control situation and assess its impact. But it seems almost contradictory to accept on the one hand the enormous cognitive difficulties of supervisory control of automated systems in aircraft, while believing on the other hand that supervisory control could function so much better in the management hierarchies of companies. An explanation is needed of why supervisory control is so hard in inflexible, relatively inadaptive hierarchies (a complex flight management system) and so easy in flexible, adaptive hierarchies (human management hierarchies). One would rather have expected it to be the other way round, since adaptive hierarchies tend to support subversion.

Stock Trading and Financial Markets

Rochlin claims that `no other major human activity has moved so quickly to the edge of "cyberspace"' as stock and financial market transactions (Roc97, p75). He notes that market trading and analysis was `once a craft skill, acquired by apprenticeship and experience', and that with the networking of trading, it is less and less so. One may indeed remark, as he does, that the locus of control in market trading has shifted, and that managerial responsibility cannot now function as it used to (he cites the Baring collapse and other recent instances of rogue trading). Rochlin also observes that the inbuilt hysteresis of the old system is not present in the new; that the markets can react almost instantly to trends that would have taken hours or days before, and that therefore certain types of positive feedback loops leading to what dynamical-systems people call singularities and what traders call crashes are now possible, and used not to be. This observation has been made also by mathematicians who work on such dynamical systems. The question is whether this is a `bad thing' or just another `new thing' that the markets and their regulators will somehow adjust to.
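
Rochlin's point about the lost hysteresis can be illustrated with a toy feedback loop - my own construction, not his, and not a model of any real market. When each round of automated selling feeds back into the next round with little damping and no enforced delay, a small shock amplifies into a crash-like singularity; with the old system's built-in sluggishness the same shock dies away:

  # Toy feedback loop (not a real market model): the same price shock under
  # a damped, sluggish market and under a fast, amplifying one.

  def run_market(feedback_gain: float, shock: float = -1.0, rounds: int = 10):
      # Each round, automated traders react to the previous round's move;
      # gain < 1 stands in for the old hysteresis, gain > 1 for instant
      # amplification with no damping or delay.
      price, move = 100.0, shock
      history = []
      for _ in range(rounds):
          price += move
          history.append(round(price, 2))
          move = feedback_gain * move  # the next round reacts to this move
      return history

  print("damped, sluggish market:", run_market(feedback_gain=0.5))
  print("fast, amplifying market:", run_market(feedback_gain=1.5))

The gain values are arbitrary; the point is only that removing damping and delay changes the qualitative dynamics, which is exactly the worry about positive feedback.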

Rochlin's argument in this case is based also on distance. He claims that computerisation leads to distance from what we may call `real worth'. New computer-based financial instruments are being developed, whose evaluation in terms of the old instruments (money, stocks, bonds, futures) is by no means clear. Developers of these new instruments claim that the market sets worth, as it always did; that these instruments have a price at which they sell, and that this price, by the standard equilibrium arguments, reflects whatever worth they have. This argument is of course based on the assumption that the value of an instrument can be measured on a ratio scale. There may be reasons to doubt this, but it is a uniform assumption in financial and stock markets. The market as a whole uses many measures to evaluate companies (debt-to-equity ratio, price-to-earnings ratio, growth, and so on), and I see no fundamental reason why evaluation of complex mathematically-constructed instruments should be on a ratio scale (2). But let us move on, for Rochlin's criticism rests on the difficulty of evaluation, not on the measurement structure that is used.

Searle has given a sustained argument showing how certain features of social reality, such as money, are formed (Sea95). Extending this argument, I can see no fundamental difference in social construction between complex financial instruments and simpler ones such as money. One may see how the concept of `worth' given to money came about, and one may see that the construction in the case of money is somehow easier than it is for more complex instruments. But the argument that complex instruments are not like more traditional worth-bearing institutions is not an argument that demonstrates their lack of viability. If money obtains its `worth' by the same social process enumerated by Searle as that by which the new instruments obtain their role in society, there is no fundamental reason to distinguish the `worth' of traditional instruments from that of new ones, except tradition itself. So I don't see that Rochlin's argument from the difficulty of relative evaluation of non-traditional instruments against traditional ones demonstrates that somehow the new instruments are built on sand. Certain of them, even all of them, might be - but rather because their game-rules cannot lead to market-like behavior, because they are less rule-governed and more like simple gambling (Lotto) than like informed gambling (buying a company's stock). And the behavior of an instrument can indeed be affected by the technology through which it is implemented. But Rochlin's argument is simply too general, depending on general characteristics such as hysteresis and lack of centralised trading, to lead rigorously to the conclusion that these new instruments cannot admit of market-like behavior.

Safety-critical Systems in Aviation

Rochlin introduces a useful phrase, `having the bubble', for the situation in which air traffic controllers and pilots mostly find themselves - in which they have cognitive command of everything relevant to their task that is going on around them. He compares some almost-miraculous saves (the UA811 accident out of Hawaii, the UA232 accident at Sioux City, the Air Canada B767) with some automation-related accidents (Kegworth; a B737 overrun at La Guardia; Bangalore; Strasbourg), and discusses automation in air traffic control, but all of this in insufficient detail to analyse satisfactorily the different types of automation and errors involved. Readers looking for an introduction to aviation automation may like to start with (Bil97).

Rochlin goes on to discuss automation in process control, along the lines of the account given by Rasmussen and colleagues, in terms of abstraction hierarchies and the classification of operator behavior into skill-based, rule-based and knowledge-based (RasJen74); see also (Ras86) and (ViRa92). Another seminal work in this area is (Rea90), which I have already mentioned. I was disappointed not to find Rochlin discussing the work of Rasmussen and Reason explicitly, for I feel it would have made his points on human operation in process control more telling.

Military Accidents

After a short chapter discussing the increasing computerisation of the military, placed in the historical context of the evolution of how the military has functioned, Rochlin discusses three military accidents: the shootdown of a Libyan airliner by Israeli fighters in 1973, the Exocet attack by an Iraqi fighter on the USS Stark in May 1987, and the shootdown by the USS Vincennes of Iran Air 655 in July 1988. Rochlin recites the common themes: coincidental rare situations, the inapplicability of standard procedures, and what he calls `cognitive lockout' - an operator or a group of operators forms a mistaken image of the situation as a framework against which to interpret the development of events, and under stress becomes more open to evidence that supports this image, while more apt to ignore evidence against it. This lockout can become extreme; for example, in the case of the Vincennes and Iran Air 655, `the record shows that the decision to fire was taken more or less calmly and deliberately on the basis of personal advice passed from junior officers to the senior AAWC, and from the AAWC to the CO - in the face of a stream of contrary evidence from the electronics aboard. ... [The accident investigation board] concluded that "stress, task-fixation and unconscious distortion of data may have played a major role in this incident."' This accident provides perhaps an extreme example of confirmation bias (Rea90, Section 6.4, p89).

Rochlin has studied the Vincennes-Iran Air 655 accident (Roc91). The accident took place in the middle of a firefight with Iranian gunboats, at a time when the Vincennes had to make an extreme manoeuvre at high speed, which caused physical chaos in the fighting-systems control room. The Navy recognised the contributions of technology and the constraints of decision-making, as well as the battle situation. It attributed the distortion to two relatively junior staff, who became convinced that the track of Iran Air 655 was that of an F-14 after a system report of a momentary transponder return compatible with that of an F-14. The CO was decorated. Rochlin's concern is more general: to analyse the `manifest failure of the decision-making system on the Vincennes to interpret the evidence correctly.'

Rochlin's short analysis of the Vincennes accident is enlightening and relatively convincing, and induces one to want to read the longer version. However, I query how much this accident has to do with heavy dependence on new technology. One military accident that Rochlin does not discuss is the shootdown of KE007 over Sakhalin Island in August 1983. This shootdown was ostensibly the result of confirmation bias on the part of the Soviet interceptors, although this is by no means the sole causal factor in the accident. The bias has been substantiated by the voice tapes of the Soviets and the Japanese Self-Defence Force recordings, and also by interviews with the pilot who shot down the aircraft. This was a very low-tech shootdown - the only relevant technology was radar and radio communications, and the Soviets had already failed to find the aircraft when it passed over the Kamchatka peninsula some time earlier. Soviet fighters were controlled by ground radar operators and led to an intercept and, supposedly, visual identification. Concern that the aircraft was about to exit Soviet airspace, coupled with the conviction of the attacking pilot that the aircraft was military, and the rushed and incomplete identification procedure, led to the shootdown. Other factors include the flight behavior of KE007, notably the fact that it was so wildly off course, and maybe also the coincidental missile testing taking place on Sakhalin Island.

A comparison of the Vincennes/Iran Air 655 accident with the KE007 accident shows many similar features: odd behavior on the part of commercial airliners (in the first case, flying directly over a firefight; in the second, hundreds of kilometers off course, in Soviet airspace, varying altitude); the perception that the aircraft was simulating the behavior of military aircraft (the Vincennes crew may have believed that the aircraft was descending towards the ship whereas it was climbing; KE007 was perceived to undertake altitude excursions that were interpreted as evasive manoeuvres); the difficulty of making a concrete identification; and a sense of urgency which precluded reception or correct interpretation of all the procedurally-required evidence. The high-tech systems on the Vincennes provided her controllers with much more evidence than would otherwise have been possible, but the evidence on which the investigating board laid weight - the momentary transponder return - is old radio/radar technology. In comparing the Vincennes/Iran Air 655 accident with KE007, I am led to wonder not how the technology aboard the Vincennes contributed to the accident, but how the large amount of contrary evidence it provided failed to contribute to a correct decision by the crew. One lesson I would draw is that the comparison clearly shows that the human system is paramount. This conclusion is drawn by Rochlin. But he also draws the conclusion that the configuration of the technology contributes significantly to this behavior. In contrast, I am tempted to conclude on the basis of the comparison of IA655 with KE007 that the technology is relatively unimportant: it provides the context in which the crew behavior takes place, but the characteristics of the behavior are largely similar. One way in which the technology could differentially contribute is via the `the computer is always right' syndrome - the tendency to believe rather than doubt what one perceives the technology to be saying. Rochlin does not adduce this factor, although he does note the fact that the technology was saying the right things but was nevertheless misinterpreted. Since Rochlin's examples and argument could support my proposition as easily as his, I conclude that the argument he gives for his conclusion is incomplete.

In conclusion, I believe a comparison of the KE007 and Iran Air 655 accidents would have demonstrated the relative priority of cognitive behavior, and its relative independence of technology, in determining the course of events in these accidents. This is seemingly contrary to the conclusion that Rochlin would wish to draw.

Techno-War and C3I

Rochlin devotes the last third of his book to the history and consequences of the computerisation of war-fighting and of C3I - command, control, communications and intelligence. I don't consider myself qualified to comment either on the history or sociology of war-fighting, or on C3I, but I found both chapters interesting and (hopefully) enlightening. Rochlin contrasts the traditional US war-fighting strategic emphasis on logistics and preparation with the more recent high-tech strategy, supposedly as exhibited in the Gulf War in 1991. He notes that the use of high-tech weaponry does not obviate the need for logistics, indeed it increases it manyfold, and notes some of the potential consequences of the move towards installing C3I in `Cyberspace'. Rochlin and his colleagues have studied the organisation of the Gulf War (RoDe91.1), (RoDe91.2), as well as U.S. Naval Carrier Flight Operations (RoPo87), and I would presume that his observations are based upon these deeper studies. This discussion forms over a fifth of the book, and I found it informative.

There was considerable discussion in the computer science and military communities in the 1980s over `Star Wars', the proposal to provide the US with a defensive shield against intercontinental ballistic missile attacks. The computational aspects of the proposal were assessed by some eminent computer scientists as technologically impossible at that time, and the reasons for this assessment were discussed in public. Some scientists considered that since the computational project goals were not achievable, it was inappropriate for them to use research money in supposed pursuit of those goals (although not necessarily inappropriate to use the same money for research in support of lesser but achievable goals). The public debate was led by David Parnas, who argued closely and carefully for his view (Par85), generating some lively and informed discussion (Gor86), (Mor86), (Ver86), (Par86), (Ral87), (Wei87). Computer scientist Alan Borning wrote an extensive essay on the reliability of computer systems and their use in command and control systems for nuclear weapons (Bor87). These studies are well known amongst computer professionals, appearing as they did in the Communications of the Association for Computing Machinery, and it is a pity not to see them discussed explicitly by Rochlin.

Are Nets at Fault? - Conclusions

Gene Rochlin has written a highly-readable and well-presented book on the consequences of computerisation in various human endeavors: work in general; stock and financial market trading; safety-critical systems such as nuclear power, air-traffic control and piloting; the military in general, high-tech war and C3I, and an analysis of high-tech accidents. A major theme has been that installing networked computers has unintended consequences which are not always either beneficial or predictable. He tries to summarise these consequences by highlighting commonalities in the analysis of networked computerisation in the various different human endeavors he considers.

Computer scientists have been interested in and occupied by similar, if not the same, questions. It was refreshing to this computer scientist to see the problems discussed from the point of view of organisational behavior. However, many of the particular issues dealt with by Rochlin have become the topic of significant multidisciplinary investigation, and one wonders if his points could have been more forcefully presented had he made more significant use of these sources.

Let me also observe that Rochlin has mostly considered specific computational domains. In hip terminology, his examples of nets are all intranets, fulfilling certain specific functions for particular users. No detailed argument against networking in general has been provided, and I doubt whether one could effectively be made along these lines (see the section `The Contra-argument' above). The interesting and sometimes worrying consequences considered by Rochlin seem to me to be specific to the kind of net used - to its goals and its constraints. He deplores the speed of correct reaction in trading markets, while noting its slowness, incompleteness and inaccuracy in battle management; he highlights the inefficacy of supervisory control in aircraft piloting, while worrying about its effectiveness in workplace management.

Furthermore, not all the significant features of the systems he considers are associated with their network properties. Fly-by-wire and otherwise highly automated aircraft are more properly described as distributed or concurrent systems. In stock or financial markets, each computational entity is simple and similar (a trading terminal), but not all players are known, and the whole networked system may exhibit emergent undesired behavior (stock trading bubbles); in an automated aircraft, all the players (flight control computers, autopilot, flight management system, air data computers) are known to the designers, each computational entity has a specific function to perform, the interactions can be mathematically difficult to analyse (many of the modern theoretical problems in concurrent systems were discovered in the 70's while considering digital flight control systems), but the states and results are forwarded to a central operator (the pilot) who has certain power to make system control decisions. Supervisory control issues therefore arise with automated aircraft but not with automated stock markets.

Despite the significant attempts to equip military organisations with high technology, failures of the kind high-tech was intended to avoid still arise (Iran Air 655 vs. KE007), the logistics problems may be increased rather than decreased, and accurate field information is still largely lacking (witness the U.S. General Accounting Office report on the Gulf War, which concluded that there was a significant lack of intelligence information). While military networking may indeed have dire consequences, as Rochlin worries, nevertheless the successful Internet is distributed, simple and robust, like the telephone service (or, one may argue in light of the East Coast outages, even more so), enabling people such as myself to transform our work environment completely, and incontrovertibly for the better.

If there is a common theme to emerge, it must be that the consequences of computerisation are tightly bound to organisational goals and to the design of the computerised infrastructure. Sometimes (Iran Air vs. KE007) computerisation doesn't seem to make a bit of difference. Sometimes (computerised markets; automated aircraft) it makes a world of difference, some of it desirable, some undesirable. In the world of networked automation, it seems, as in so much else, the devil lies in the details, if you believe he lies anywhere at all.

References

(Bil97): Charles E. Billings, Aviation Automation: The Search for a Human-Centered Approach, Lawrence Erlbaum Associates, New Jersey, 1997.

(Bor87): Alan Borning, Computer System Reliability and Nuclear War, Communications of the ACM 30(2):112-131, February 1987.

(Eco-94-6-18): The shock of the not-quite new (Economics Focus column), The Economist, 18 June 1994, p85.

(Gor86): Ed Gordon et al., Correspondence concerning (Par85), Communications of the ACM 29(4):262-265, April 1986.

(KrLu71): D. H. Krantz, R. D. Luce, P. Suppes and A. Tversky, Foundations of Measurement, vol. 1, New York: Academic Press, 1971.

(Mor86): Mike Morton et al., Correspondence concerning (Par85), Communications of the ACM 29(7):591-592, July 1986.

(Nar85): Louis Narens, Abstract Measurement Theory, Cambridge, MA: MIT Press, 1985.

(Odl95): Andrew M. Odlyzko, Tragic loss or good riddance? The impending demise of scholarly journals, Notices of the American Mathematical Society, January 1995, available at http://www.ams.org/publications/notices/199501/199501-toc.html. A longer version appeared in the Journal of Universal Computer Science 0(0) (Pilot Issue), November 1994, at http://www.iicm.edu/jucs

(Par85): David Lorge Parnas, Software Aspects of Strategic Defense Systems, American Scientist 73(5):423-440, 1985, reprinted in Communications of the ACM 28(12):1326-1335.

(Par86): David Lorge Parnas, Correspondence concerning (Par85), Communications of the ACM 29(10):930-931, October 1986.

(Per84): Charles Perrow, Normal Accidents: Living With High-Risk Technology, New York: Basic Books, 1984.

(Ral87): Anthony Ralston et al., Correspondence concerning (Par85), Communications of the ACM 30(1):9-11, January 1987.

(RasJen74): J. Rasmussen and A. Jensen, Mental procedures in real-life tasks: A case study of electronic troubleshooting, Ergonomics 17:293-307, 1974.

(Ras86): Jens Rasmussen, Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, New York: North-Holland, 1986.

(RaDu87): Jens Rasmussen, Keith Duncan and Jacques Leplat, eds., New Technology and Human Error, New York: John Wiley and Sons, 1987.

(Rea90): James Reason, Human Error, Cambridge: Cambridge University Press, 1990.

(Roc91): Gene I. Rochlin, Iran Air Flight 655: Complex, Large-Scale Military Systems and the Failure of Control, in Renate Mayntz and Todd R. La Porte, eds., Responding to Large Technical Systems: Control or Anticipation, pp95-121, Amsterdam: Kluwer, 1991.

(Roc97): Gene I. Rochlin, Trapped In The Net: The Unanticipated Consequences of Computerization, Princeton University Press, 1997.

(RoDe91.1): Gene I. Rochlin and Chris C. Demchak, The Gulf War: Technological and Organisational Implications, Survival 33(3):260-273, May-June 1991.

(RoDe91.2): Gene I. Rochlin and Chris C. Demchak, Lessons of the Gulf War: Ascendant Technology and Declining Capability, Policy Papers in International Affairs, Berkeley: Institute of International Affairs, University of California, Berkeley, 1991.

(RoPo87): Gene I. Rochlin, Todd R. La Porte and Karlene H. Roberts, The Self-Designing High-Reliability Organization: Aircraft Carrier Flight Operations at Sea, Naval War College Review 40(4):76-90, Autumn 1987.

(Sea95): John R. Searle, The Construction of Social Reality, Penguin, 1995.

(Ver86): Edward W. ver Hoef et al., Correspondence concerning (Par85), Communications of the ACM 29(9):830-831, September 1986.

(ViRa92): Kim J. Vicente and Jens Rasmussen, Ecological Interface Design: Theoretical Foundations, IEEE Transactions on Systems, Man and Cybernetics 22(4):589-606, July/August 1992.

(Wei87): David M. Weiss, Correspondence concerning (Par85), Communications of the ACM 30(11):905, November 1987.

Footnotes

(1): Rochlin himself appears to have been one. (Roc97, Chapter 2, Note 18, p222) says:

My own memories of being a sometimes reluctant player in the rapid development of the now famous Berkeley Standard Distribution (BSD) version of UNIX remain quite vivid. From time to time there would issue by message from the computer center an announcement of a new release of the editor, or the formatter, or even the terminal definition program, that drove us not only to despair but to the center to pick up the new documentation. More than one user found that a year's leave from Berkeley required extensive relearning before it was possible to come up to speed again.
My memories of this time also remain quite vivid. I was the Unix Services Coordinator at that very same U.C. Berkeley Computer Center from 1979 through early 1981, a period included in the BSD development period which Rochlin mentions. I regularly taught introductory courses in Unix and its tools for University faculty and staff. I don't recall either the (in)famous ed, ex or vi editors changing significantly during this time. The basic Unix commands remain today, 18 years later, virtually identical to what they were then, as does vi, which is still the editor of choice for any system administrator (although I much prefer emacs, I mostly use vi when performing sysadmin tasks; even emacs has remained for me cognitively virtually the same in the 14 years I've been using it). I, too, am annoyed but mostly bored by the task of learning to use new versions of old-favorite software tools. But the situation in the PC world today appears to be orders of magnitude worse than in the Unix world that Rochlin mentions. While PC people advocate plug-and-play as a consequence of the magnificent forward development of Microsoft software, Unix systems such as those sold by Sun have always been plug-and-play: I take my new Sun machines out of the box, boot them up, and after ten minutes configuring a few net service files, they are fully-functioning members of my net.

The totally different experiences of Rochlin and myself, in the same institution at the same time, and even indirectly interacting with each other, lead me to wonder how many of the arguments presented in his book are a matter of extrapolation from personal perception and experience. Extrapolation from personal experience can of course also be an objective assessment of a concrete situation. But when other experiences of the same process differ so greatly, one is constantly looking for recognition and acknowledgement of subjectivity, and some attempt to ground observations in some objective measure - which in the case of this footnote describing his early Unix experience, I don't believe Rochlin has done.

(2): A ratio scale is a particular form of measurement. A measurement is, mathematically speaking, a mapping (a homomorphism in fact) of whatever is being measured into a relational structure. There may be more than one way of producing this mapping, while nevertheless obtaining the same results. For example, I stand you up against the wall and mark where the top of your head reaches. I then put a tape measure between the mark and the ground to measure your `height'. I do the same for me. Now, whether I measure our height in centimeters or inches, the ratio of the value I obtain for you with the value I obtain for me remains the same - is `invariant'. In fact, if I find a number N such that

N.(my height in centimeters) = (my height in inches)

then this number N will also satisfy

N.(your height in centimeters) = (your height in inches)

and in fact

N.(anyone's height in centimeters) = (that person's height in inches)

In general, for any two admissible measurement methods for height, there is a number N such that multiplication by N will uniformly translate height measured with the one scale into height measured with the other. This is the distinguishing property of a ratio scale. Height is thus measured on a ratio scale.

For comparison, let's consider an ordinal scale. Suppose I want to determine merely whether I am taller than or shorter than or the same height as you. Say that my tape measure shows that I am taller than you. I can transform the measurements I made by any monotonically increasing function (one in which, if the argument x is less than the argument y, then the value of the function for the argument x is less than its value for argument y), and I obtain the same result, that I am taller than you. So comparative height measurements are preserved by any monotonically-increasing function. However, ratio measurements are not: I may only be one centimeter taller than you, but there is a monotonically-increasing function that will transform this one centimeter into three meters, and I am certainly not that much taller than you. True ratio assessments are thus not preserved by all monotonically-increasing functions. For more on measurement theory and appropriate measurement, see (KrLu71), (Nar85).
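
A small numerical illustration of the difference, added here for concreteness (the heights are invented): an admissible change of units multiplies every height by the same positive constant and so preserves ratios, whereas an arbitrary monotonically-increasing transformation preserves the ordering but can distort the ratios without limit:

  # Illustration of the footnote's point: an admissible change of units
  # (multiplication by a constant) preserves ratios; a merely monotone
  # transformation preserves order but not ratios.

  heights_cm = {"you": 170.0, "me": 171.0}
  cm_per_inch = 2.54

  heights_in = {k: v / cm_per_inch for k, v in heights_cm.items()}
  print(heights_cm["me"] / heights_cm["you"])  # ratio measured in centimetres
  print(heights_in["me"] / heights_in["you"])  # the same ratio in inches

  # A monotonically-increasing transformation keeps the ordering ...
  warped = {k: (v - 169.0) ** 3 for k, v in heights_cm.items()}
  print(warped["me"] > warped["you"])          # still True: 'me' is taller
  # ... but wrecks the ratio: one centimetre becomes a factor of eight.
  print(warped["me"] / warped["you"])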