University of Bielefeld - Faculty of Technology
Networks and Distributed Systems
Research Group of Prof. Peter B. Ladkin, Ph.D.
Peter Ladkin
Technische Fakultät, Universität Bielefeld
ladkin@rvs.uni-bielefeld.de
20 October 1996, footnote added 23 October
On 16 September 1996, Vincent Verweij, a journalist with Zembla, the Dutch National Television Channel 3, asked me some questions about computer safety in aviation. The questions were pitched at a public-interest level, higher than the more technical level at which we techies tend to talk with each other. They are perceptive questions. Some feedback from colleagues has been included in my answers.
Vincent used the term fly-by-wire to refer to all types of cockpit automation. The term has become a public 'buzzword' for computers-in-the-cockpit. To technical people, however, fly-by-wire means the replacement of the physical link between the pilot's flight controls - the controls that make the aircraft go up and down and turn left or right - and the control surfaces with an electronic link through a computer. This is a strictly narrower sense than that of the 'buzzword'. There are many other computers in the cockpit - flight management and navigation computers, flight data computers for airspeed and so on, autopilots, computers to control the engines, and warning systems such as TCAS. These can also be critical for flight safety - see the reports on the Puerto Plata B757 accident, and the Martinair B767 and A340 Flight Management Guidance System problem reports. Accordingly, I have edited Vincent's questions to replace the term 'fly-by-wire' with a buzzword I prefer: computers-in-the-cockpit, CIC.
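For the technically curious, here is a minimal sketch in Python of what 'replacing the physical link with a computation' amounts to in the narrow sense. Everything in it - the gains, the travel limit, the pitch-rate feedback term - is invented for illustration; real flight control laws are vastly more elaborate.

    # Minimal sketch of 'fly-by-wire' in the narrow sense: the pilot's
    # stick position reaches the elevator only as input to a computation,
    # not through cables and rods. All numbers here are invented.

    ELEVATOR_LIMIT_DEG = 15.0  # hypothetical surface travel limit

    def elevator_command(stick_deflection: float, pitch_rate_deg_s: float) -> float:
        """Map stick input plus a sensed pitch rate to an elevator angle.

        Because the mapping is software, the computer can also enforce
        limits that a purely mechanical linkage could not.
        """
        command = 10.0 * stick_deflection - 0.5 * pitch_rate_deg_s
        return max(-ELEVATOR_LIMIT_DEG, min(ELEVATOR_LIMIT_DEG, command))

    # Full back stick (1.0) with a 4 deg/s pitch rate already developing:
    print(elevator_command(1.0, 4.0))   # 8.0 degrees nose-up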
All aircraft designed and built within the last 15 years have some computer technology in the cockpit, and they have a better safety record than older aircraft. But how much of that is due to the computers themselves is not clear. The overall design of these modern aircraft reflects all of our previous experience with earlier designs, so we'd expect them to be safer anyway.
The computers are intended to make flying easier [1]. In general they do: piloting tasks at take-off and landing, in particular, become much easier. But when things don't happen as expected, it can be hard to figure out quickly what's going on, and to cope with it.
But we also can't do without the computers. In the air traffic environment of the future, with so many people flying, computer technology like TCAS (the automated warning system which commercial aircraft now carry in the US to warn of other aircraft in the vicinity) will be essential for safety.
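For readers who wonder what a system like TCAS actually computes: public descriptions of TCAS-style logic centre on 'tau', the estimated time to the closest point of approach, compared against alert thresholds. The sketch below is a toy illustration of that idea only; the threshold values are representative of published accounts, not the certified logic.

    # Illustrative sketch only -- not the certified TCAS II logic.
    # A TCAS-style conflict test compares "tau" (range divided by
    # closure rate) against alert thresholds. The 40 s / 25 s values
    # are representative of published descriptions.

    def tau_seconds(range_nm: float, closure_rate_kt: float) -> float:
        """Estimated time to closest point of approach, in seconds."""
        if closure_rate_kt <= 0:        # diverging or parallel traffic
            return float("inf")
        return range_nm / closure_rate_kt * 3600.0

    def advisory(range_nm: float, closure_rate_kt: float) -> str:
        t = tau_seconds(range_nm, closure_rate_kt)
        if t < 25:
            return "RESOLUTION ADVISORY"   # crew must manoeuvre
        if t < 40:
            return "TRAFFIC ADVISORY"      # crew alerted to look
        return "clear"

    # Example: intruder 5 NM away, closing at 600 kt -> tau = 30 s.
    print(advisory(5.0, 600.0))            # TRAFFIC ADVISORY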
The safety of an aircraft depends on designing and building it to the highest standards of safety we know. The same goes for its computer systems. Computers in the cockpit are unavoidable for future safety. Leaving them all out is not an option. At the same time, we must pay careful attention to how well we design and build those computer systems.
That's a very interesting question. Some recent incidents lead me to wonder whether some of the computers might in fact be too complicated for pilots to use effectively in certain situations [2]. Maybe simpler is better.
The critical technology has usually been flown on research aircraft for many years before we see it in commercial aircraft. But you can't test everything out perfectly. Software errors by themselves are relatively rare, but not all computer-related errors are software errors [3]. Systems are becoming complex enough, and there are so many more of them flying, that potential errors crop up more frequently. We're also becoming cleverer: incidents that we would previously have blamed on the pilots are now being recognised as errors in the design of the total system.
However, we should keep perspective. In 1995 there were about 60 fatal accidents and 110 non-fatal incidents to passenger and cargo transport aircraft world-wide (13). Just one of these, roughly one event in 170, raised significant questions about the use of computers (AA965 near Cali), and even there the computer was only one of many possible factors involved in the accident. For 1994, the figures were about 50 fatal and 80 non-fatal, with one fatal accident raising questions about computer design (but more about the pilots!), one high-visibility computer-related test-flight accident (the A330), and 4 or 5 reported incidents that were computer-related (14). So computers were involved in only a tiny proportion of overall airplane incidents, and this is minuscule compared with the number of people who die on the road world-wide each year.
The price of safety is vigilance. It's when you get complacent that the errors occur. Complacency is encouraged when you think you know everything. Then you get 'surprised'. This is as true for designers as it is for pilots.
That we can do so much more, and so much more efficiently.
Controlling the complexity, getting the design right, verifying that it really works as designed, and making it easy for pilots to use.
The Martinair Boeing 767 suffered a failure of the electronic flight information system. That's a critical failure which warranted emergency handling. Imagine driving a car on the freeway and finding your windshield suddenly covered with mud. For this crew it was as if they had a little porthole they could open to see through: they had 'traditional' backup instruments, small, simple, robust electromechanical instruments that worked when everything else didn't. That's one good idea, though not everybody thinks it's the best. There was also a partial control failure which meant they had to land faster than normal. The pilots did a very good job of dealing with the situation. That's the first lesson: good pilots are even more essential now.
The second thing it tells us is that computer system failures can be complex to figure out. They still don't know what went wrong.
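The 'porthole' idea mentioned above, simple standby instruments that share nothing with the complex primary system, can be sketched in a few lines. The names and values below are invented; the point is the independence expressed in the comments.

    # Illustrative sketch of the backup-instrument idea: if the complex
    # primary display system fails, the crew falls back on simple,
    # independent electromechanical instruments. All names are invented.

    def airspeed_indication(efis_ok: bool, efis_value: float,
                            standby_value: float) -> tuple[str, float]:
        """Prefer the primary electronic display; fall back to the standby.

        The point of the design is independence: the standby instrument
        shares neither software nor power supply with the EFIS, so a
        single fault cannot take out both.
        """
        if efis_ok:
            return ("EFIS", efis_value)
        return ("standby", standby_value)

    print(airspeed_indication(False, 0.0, 214.0))   # ('standby', 214.0)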
The Boeing 767 is not a 'fly-by-wire' design in the narrow sense. It has hydromechanical controls and has given exemplary service for a decade and a half. Given its record, it's one of the safest planes flying.
Pilots have lots of things to do at once. The lesson is: Keep it simple enough so that they retain a complete overview of everything, especially in critical phases of flight.
And subject the computer systems to thorough design review. When a light bulb fails, you simply replace the light bulb. When complex computer systems fail, often it's because of a design oversight. That's just the nature of computers. Any bug was there from the beginning. Bugs are different from light-bulb failures. We have to handle them differently.
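The difference between a design fault and a light-bulb failure can be shown with a toy example. The function and its boundary bug below are invented; what matters is that every 'redundant' copy fails identically, which is why swapping units, the light-bulb cure, does not help.

    # Why replacing units doesn't cure a design fault: every copy of
    # the software computes the same (wrong) answer. Invented example.

    def altitude_band(alt_ft: int) -> str:
        # Intended: 0-9999 ft is "low", 10000 ft and above is "high".
        # Design fault: the boundary is written ">" instead of ">=",
        # so exactly 10000 ft is misclassified -- in every installed copy.
        return "high" if alt_ft > 10000 else "low"

    # Three "redundant" identical units all agree, and all are wrong:
    units = [altitude_band, altitude_band, altitude_band]
    print([u(10000) for u in units])   # ['low', 'low', 'low']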
I don't have the exact numbers, and I'm not sure anybody does. But you could get a rough idea by going to, say, Schiphol and looking at the aircraft types as they sit at the gates. The first true 'fly-by-wire' aircraft to enter commercial service was the Airbus A320, in the late 1980s (15). Lots of people now fly the A320, as well as the A319 and A321 and its larger cousins, the A330 and A340. Boeing has just introduced the large new B777. When you fly one of these airplanes, you're being flown 'by wire' in the narrow sense. But most aircraft in commercial service now have some critical computer systems whose complete failure could cause an emergency.
Yes, I do, just as I believe that busy freeways are riskier than empty ones. I still travel on them, and I take more care.
I've already mentioned TCAS, which is important for take-off and arrival safety. New computer systems in the control tower can give vastly improved information to controllers and make the skies much safer [4]. Making the systems fail-safe means spending money and time; safety costs good money, and I believe we should give it high priority. But it is important to keep things in perspective: improving roads and providing convenient public transportation may save more passenger lives on the journey to the airport than we could save by doing anything at the airport itself.
[1]: Another colleague points out that computerisation has been sold on economic grounds: replacing cables and hydraulic systems with copper wires (or, in the future, fiber-optic cables) saves weight. And let us not forget that the introduction of digital computerisation in the cockpit coincided with a board of inquiry report to the FAA that a flight engineer (and thus his salary) was no longer needed (1, p180).

Those interested in specific investigations into pilot workload and complexity in a computerised environment might look at the work of the aviation psychologists Nadine Sarter and David Woods (2), (3), the Ph.D. thesis of Victor Riley (4), and particularly the work of Earl L. Wiener, for example (5). General introductions to the question of pilot workload may be found in (7) and (9), and to human factors in general in those books and also in (11). As for the future: NASA has looked at the pilot workload and cockpit automation question for the future High-Speed Civil Transport aircraft (HSCT) in (12).

(End of Footnote [1])
[2]: Again, Jim Irving doesn't agree:
[3]: For instance, errors which fall into the category of human-computer interaction (HCI) errors, such as those of Cali and Puerto Plata, which I discussed in RISKS in The Cali and Puerto Plata B757 Crashes. (End of Footnote [3])
[4]: One example is on the NTSB's 'five most-wanted' aviation safety improvements list. This safety recommendation concerns enhancements to the ARTS IIA and ARTS IIIA terminal radar systems to provide 'Mode C Intruder' alerts, that is, alerts to the presence of general aviation (light) aircraft using Mode C transponders in restricted terminal airspace for which they have received no clearance. The original recommendation arose from the inquiry into the 1986 mid-air collision over Cerritos, California. Installation is proceeding apace, according to the FAA, but the NTSB recommends more urgent installation. (End of Footnote [4])
(2): Nadine B. Sarter and David D. Woods, Pilot Interaction With Cockpit Automation: Operational Experiences With the Flight Management System, International Journal of Aviation Psychology 2(4):303-321, 1992.
(3): Nadine B. Sarter and David D. Woods, Pilot Interaction With Cockpit Automation II: An Experimental Study of Pilots' Model and Awareness of the Flight Management System, International Journal of Aviation Psychology 4(1):1-28, 1994.
(4): Victor A. Riley, Human Use of Automation, Ph.D. thesis, Department of Psychology, University of Minnesota, Minneapolis, MN, May 1994.
(5): Earl L. Wiener, Crew Coordination and Training in the Advanced Technology Cockpit, Chapter 7 of (6).
(6): Earl L. Wiener, Barbara G. Kanki and Robert L. Helmreich, Cockpit Resource Management, Academic Press, 1993.
(7): Stanley N. Roscoe, Cockpit Workload, Residual Attention, and Pilot Error, Chapter 14 of (8).
(8): Stanley N. Roscoe et al., Aviation Psychology, Iowa State University Press, 1980.
(9): David O'Hare and Stanley N. Roscoe, Human Factors in Cockpit Design, Chapter 4 of (10).
(10): David O'Hare and Stanley N. Roscoe, Flightdeck Performance: The Human Factor, Iowa State University Press, 1990.
(11): E. L. Wiener and D. C. Nagel, Eds., Human Factors in Aviation, Academic Press, 1988.
(12): Michael T. Palmer, William H. Rogers, Hayes N. Press, Kara A. Latorella and Terence S. Abbott, A Crew-Centered Flight Deck Design Philosophy for High-Speed Civil Transport Aircraft, NASA Technical Memorandum 109171, NASA Langley Research Center, Hampton, VA, January 1995.
(13): David Learmount, Off Target: The World Airline Safety Review, 1995, Flight International, 17-23 January 1996, pp. 24-34.
(14): David Learmount, Expensive Mistakes: The World Airline Safety Review, 1994, Flight International, 18-24 January 1995, pp. 33-42.
(15): Airliners of the World, Flight International, 6-12 December 1995, pp. 49-86.

Copyright © 1999 Peter B. Ladkin, 1999-02-08