


The Risks Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 11, Issue 78

Monday 3 June 1991

Contents

o Lauda Air Crash
Paul Leyland
Carsten Wiethoff
Ralph Moonen
Mark Evans
o Re: AFTI-F16
John Rushby
o Lottery bar codes no risk, spokesman says
Martin Minow
o Re: Viper
Pete Mellor
o Re: The FBI and computer networks
Jim Thomas
o Re: Voting by phone
Bob Rehak
Larry Campbell
Arnie Urken
o Re: The Death of Privacy?
Brett Cloud
o Re: Credit reporting
David A. Curry
Bill Murray
---------------------------------------------

Lauda Air Crash

Paul Leyland < pcl@convex.oxford.ac.uk >
Mon, 3 Jun 91 13:36:39 +0100
   
   _The Times_ (London), Monday June 3rd 1991.
   
   		Lauda says crash engine failed
   
   New evidence that the crash of a Lauda Air Boeing 767-300 in Thailand had been
   caused by in-flight reversal of the thrust on one engine has stunned the
   aviation industry.  Niki Lauda, owner of Lauda Air, made the claim in Vienna
   after returning from Washington, where the flight data and cockpit voice
   recorders of the jet were being analysed.  The Austrian transport ministry
   supported the assertion.  If the diagnosis were confirmed, the accident would
   be unprecedented, Herr Lauda said.  The crash investigators have yet to
   comment.
   
   The computerised airliner's systems have the capacity for self-analysis and
   should have corrected such a basic error.  Reverse thrust is normally locked
   out during flight and only used on the ground.  It is thought to be virtually
   impossible for reverse thrust to be deployed when the engine is pushing at
   maximum, as the jet would have been.  The incident happened about 16 minutes
   after takeoff.
   
   Investigators for Boeing, which manufactured the thrust reversers, were allowed
   access to the crash site for the first time on Friday.  Herr Lauda said the
   flight data recorder was damaged and could not be used to analyse the crash.
   He said the cockpit voice recorder indicated an advisory light had come on
   seconds before the crash and that a voice was heard saying the light was
   glowing intermittently.  Seconds later, First Officer Josef Thurner was heard
   saying: "It deployed."  Herr Lauda took that to mean the reverse thrust was
   engaged.  The entire incident, from the moment of the first warning light until
   the plane broke up, took no more than a minute.  Last night a spokesman for the
   Civil Aviation Authority said that no checks were to be ordered immediately on
   767s owned by British airlines.
   
   

Lauda Air Boeing 767-300 crash in Thailand

Carsten Wiethoff < cnwietho@immd4.informatik.uni-erlangen.de >
Mon, 3 Jun 91 11:57:10 MET DST
   
     [...] Private speculation: could something like a simple overflow or sign
   error cause the engine to go from maximum forward to maximum reverse thrust?
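
   A minimal sketch of the arithmetic behind such a speculation, assuming
   (purely hypothetically) a thrust command held in a 16-bit two's-complement
   register -- this illustrates the overflow mechanism only, and implies
   nothing about the actual 767 avionics:

      # Hypothetical sketch: a 16-bit signed thrust command wraps from
      # maximum forward to maximum reverse on overflow.
      def to_int16(value):
          """Interpret the low 16 bits of value as two's complement."""
          value &= 0xFFFF
          return value - 0x10000 if value >= 0x8000 else value

      max_forward = 32767               # largest positive 16-bit command
      print(to_int16(max_forward + 1))  # -32768: full forward becomes full reverse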
   
   

Lauda air plane crash

Ralph 'Hairy' Moonen < rmoonen@hvlpa.att.com >
Mon, 3 Jun 91 08:51 MDT
   
   [...] Now when the media make something out to be a computer error, they
   are hardly ever specific.  What I want to know is: was it a hardware error
   or a software error?  And surely, wouldn't a mid-air reversal of thrust just
   rip off the wing, leaving the plane to plummet down totally out of control?
   Any air experts out there willing to comment on this?
                                                           --Ralph Moonen
   
   

Thailand air crash

Mark Evans < evansmp@uhura.aston.ac.uk >
Mon, 3 Jun 91 10:04:16 +0100
   
   [...] However, the reverse thrust system is only used on landing.  It would
   appear unlikely (not least because of the safety aspects) that reverse
   thrust is under the direct control of the flight management system.  The
   article also mentions mechanical interlocks; these are presumably in
   addition to the hydraulic systems actuating the thrust deflectors being
   turned off.  The principle is quite simple: the thrust deflectors are locked
   in place unless the plane is on the ground (landing gear down) and the
   control in the cockpit is activated.
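
   That principle lends itself to a one-line predicate.  A sketch, as an
   illustration of the stated principle only (not Boeing's implementation;
   the signal names are hypothetical):

      # Reverse thrust may deploy only when every interlock condition holds.
      def reverser_may_deploy(weight_on_wheels, gear_down, reverse_commanded):
          # Any single condition indicating "airborne" keeps the
          # deflectors locked.
          return weight_on_wheels and gear_down and reverse_commanded

      assert not reverser_may_deploy(False, True, True)  # in flight: locked
      assert reverser_may_deploy(True, True, True)       # on ground: allowed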
   
   Mark Evans, University of Aston, Birmingham, England
   
   
---------------------------------------------

Re: AFTI-F16 (RISKS-11.61, 62, 64)

John Rushby < RUSHBY@csl.sri.com >
Sat 1 Jun 91 16:15:06-PDT
   
        For RISKS readers interested in digital flight control systems (DFCS), I
   highly recommend the papers by Mackall and his colleagues on the AFTI-F16 (and
   some other) flight tests; [4] is particularly thorough and informative.
   
   The following extracts from [6] summarize some of the points (fairly, I hope).
   It seems that redundancy management became the primary source of unreliability
   in the AFTI-F16 DFCS.  Asterisks indicate footnotes; the footnotes and the
   references are given at the end.
   
        John
   
        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
   
        A plausibly simple approach to redundancy management in an N-modularly
   redundant system is the "asynchronous" design, in which the channels run fairly
   independently of each other: each computer samples sensors independently,
   evaluates the control laws independently, and sends its actuator commands to an
   averaging or selection component that chooses the value to send to the actuator
   concerned.  The triplex-redundant DFCS of the experimental AFTI-F16 was built
   this way, and its flight tests reveal some of the shortcomings of the approach
   [2, 4].
   
        First, because the unsynchronized individual computers may sample sensors
   at slightly different times, they can obtain readings that differ quite
   appreciably from one another.  The gain in the control laws can amplify these
   input differences to provide even larger differences in the results submitted
   to the output selection algorithm.  During ground qualification of the
   AFTI-F16, it was found that these differences sometimes resulted in a channel
   being declared failed when no real failure had occurred [3, p. 478]. *1
   Accordingly, rather a wide spread of values must be accepted by the threshold
   algorithms that determine whether sensor inputs and actuator outputs are to be
   considered "good."  For example, the output thresholds of the AFTI-F16 were set
   at 15% plus the rate of change of the variable concerned; also the gains in the
   control laws were reduced.  This increases the latency for detection of faulty
   sensors and channels, and also allows a failing sensor to drag the value of any
   averaging functions quite a long way before it is excluded by the input
   selection threshold; at that point, the average will change with a thump [4,
   Figure 20] that could have adverse effects on the handling of the aircraft.
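
   A sketch of the effect just described -- an illustrative reading of [4]
   with made-up numbers, not the actual AFTI-F16 selection code:

      # Average only the readings within a threshold of a reference value
      # (e.g., the mid-value of the redundant channels).
      def select_and_average(readings, reference, threshold):
          good = [r for r in readings if abs(r - reference) <= threshold]
          return sum(good) / len(good)

      reference, threshold = 10.0, 3.0
      for drift in (0.0, 1.0, 2.0, 2.9, 3.1):   # one sensor slowly failing
          readings = [10.0, 10.1, 10.0 + drift]
          print(round(select_and_average(readings, reference, threshold), 2))
      # The average is dragged from 10.03 up to 11.0, then "thumps" back to
      # 10.05 once the failing sensor finally exceeds the threshold.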
   
        The danger of wide sensor selection thresholds is dramatically illustrated
   by a problem discovered in the X29A. This aircraft has three sources of air
   data: a nose probe and two side probes.  The selection algorithm used the data
   from the nose probe provided it was within some threshold of the data from both
   side probes.  The threshold was large to accommodate position errors in certain
   flight modes.  It was subsequently discovered that if the nose probe failed to
   zero at low speed, it would still be within the threshold of correct readings,
   causing the aircraft to become unstable and "depart."  This error was found in
   simulation, but 162 flights had been at risk before it was detected [5].
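
   A sketch of that selection logic and its failure mode, with hypothetical
   numbers (the real X29A thresholds and units are not given in the extract):

      # Use the nose probe whenever it agrees with BOTH side probes to
      # within a wide threshold; otherwise fall back to the side probes.
      def select_air_data(nose, left, right, threshold):
          if abs(nose - left) <= threshold and abs(nose - right) <= threshold:
              return nose
          return (left + right) / 2.0

      threshold = 60.0     # wide, to accommodate position errors
      true_reading = 50.0  # side probes agree at low speed (hypothetical)
      print(select_air_data(0.0, true_reading, true_reading, threshold))
      # 0.0 -- a nose probe failed to zero is still within 60 of the
      # correct 50, so the dead reading is passed to the control laws.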
   
        An even more serious shortcoming of asynchronous systems arises when the
   control laws contain decision points.  Here, sensor noise and sampling skew may
   cause independent channels to take different paths at the decision points and
   to produce widely divergent outputs.  This occurred on Flight 44 of the
   AFTI-F16 flight tests [4, p. 44].  Each channel declared the others failed; the
   analog back-up was not selected because the simultaneous failure of two
   channels had not been anticipated and the aircraft was flown home on a single
   digital channel.  Notice that all protective redundancy had been lost, and the
   aircraft was flown home in a mode for which it had not been designed--yet no
   hardware failure had occurred.
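
   A sketch of the divergence mechanism, with hypothetical numbers: each
   channel samples the same sensor a few milliseconds apart, lands on a
   different side of a branch in the control laws, and produces an output far
   outside any reasonable failure threshold.

      # A decision point ("software switch") in a control law.
      def control_law(angle_of_attack):
          if angle_of_attack > 15.0:
              return -2.0   # high-alpha branch: large corrective command
          return 0.5        # normal branch: small correction

      samples = [14.9, 15.1, 15.0]   # three channels, skewed samples
      print([control_law(a) for a in samples])
      # [0.5, -2.0, 0.5] -- channel 2 took the other branch, so the other
      # channels vote it out even though no hardware failed.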
   
        Another illustration is provided by a 3-second "departure" on Flight 36 of
   the AFTI-F16 flight tests, during which sideslip exceeded 20deg, normal
   acceleration exceeded first -4g, then +7g, angle of attack went to -10deg, then
   +20deg, the aircraft rolled 360deg, the vertical tail exceeded design load, all
   control surfaces were operating at rate limits, and failure indications were
   received from the hydraulics and canard actuators.  The problem was traced to
   an error in the control laws, but subsequent analysis showed that the side air
   data probe was blanked by the canard at the high angle of attack and sideslip
   achieved during the excursion; the wide input threshold passed the incorrect
   value through and different channels took different paths through the control
   laws.  Analysis showed this would have caused complete failure of the DFCS and
   reversion to analog backup for several areas of the flight envelope [4, pp.
   41-42].*2
   
        Several other difficulties and failure indications on the AFTI-F16 were
   traced to the same source: asynchronous operation allowing different channels
   to take different paths at certain selection points. The repair was to
   introduce voting at some of these "software switches."  In one particular case,
   repeated channel failure indications in flight were traced to a roll-axis
   "software switch."  It was decided to vote the switch (which, of course,
   required ad hoc synchronization) and extensive simulation and testing were
   performed on the changes necessary to achieve this.  On the next flight, the
   problem was found still to be there.  Analysis showed that although the switch
   value was voted, it was the unvoted value that was used [4, p. 38].
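
   An illustrative reconstruction of that repair error (not the actual flight
   code):

      # The switch position is voted across channels, but the control law
      # still reads the local, unvoted copy.
      def majority(values):
          return max(set(values), key=values.count)

      local_switch = [True, False, True]     # per-channel switch positions
      voted_switch = majority(local_switch)  # the voted value: True

      for channel in range(3):
          used = local_switch[channel]       # BUG: should use voted_switch
          print(f"channel {channel}: voted={voted_switch} used={used}")
      # Channel 1 still acts on its unvoted False, so the in-flight
      # failure indications persist despite the "fix".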
   
        The AFTI-F16 flight tests revealed numerous other problems of a similar
   nature.  Summarizing, Mackall [4, pp. 40-41] writes:
   
   "The criticality and number of anomalies discovered in flight and ground tests
   owing to design oversights are more significant than those anomalies caused by
   actual hardware failures or software errors.
   
   "...qualification of such a complex system as this, to some given level of
   reliability, is difficult ...[because] the number of test conditions becomes so
   large that conventional testing methods would require a decade for completion.
   The fault-tolerant design can also affect overall system reliability by being
   made too complex and by adding characteristics which are random in nature,
   creating an untestable design.
   
   "As the operational requirements of avionics systems increase, complexity
   increases.  Reducing complexity appears to be more of an art than a science and
   requires an experience base not yet available.  If the complexity is required,
   a method to make system designs more understandable, more visible, is needed.
   
   "The asynchronous design of the [AFTI-F16] DFCS introduced a random,
   unpredictable characteristic into the system.  The system became untestable in
   that testing for each of the possible time relationships between the computers
   was impossible.  This random time relationship was a major contributor to the
   flight test anomalies.  Adversely affecting testability and having only
   postulated benefits,*3 asynchronous operation of the DFCS demonstrated the need
   to avoid random, unpredictable, and uncompensated design characteristics."
   
   Footnotes 
   
   1: Also, in the flight tests of the X31 the control system "went into a
   reversionary mode four times in the first nine flights, usually due to
   disagreement between the two air data sources" [1].
   
   2: However, the greater the benefit provided by DFCS, the less plausible it
   becomes to provide adequate back-up systems employing different technologies.
   For example, the DFCS of an experimental version of the F16 fighter (the
   "Advanced Fighter Technology Integration" or AFTI-F16) provides control in
   flight regimes beyond the capability of the simpler analog back-up system.
   Extending the capability of the back-up system to the full flight envelope of
   the DFCS would add considerably to its complexity--and it is the very
   simplicity of that analog system that is its chief source of credibility as a
   back-up system [2].
   
   3: An asynchronous design was chosen for the AFTI-F16 DFCS because "the
   contractor believed synchronization would introduce a single-point failure
   caused by electromagnetic interference (EMI) and lightning effects" [4, p. 7]
   --which may well have been correct given the technology of the early 1980s.
   
   References
   
   [1] Michael A. Dornheim.  X-31 flight tests to explore combat agility to 70
   deg.  AOA.  Aviation Week and Space Technology, pages 38-41, March 11, 1991.
   
   [2] Stephen D. Ishmael, Victoria A. Regenie, and Dale A. Mackall.  Design
   implications from AFTI/F16 flight test.  NASA Technical Memorandum 86026, NASA
   Ames Research Center, Dryden Flight Research Facility, Edwards, CA, 1984.
   
   [3] Dale A. Mackall. AFTI/F-16 digital flight control system experience. In
   Gary P. Beasley, editor, NASA Aircraft Controls Research 1983, pages 469-487.
   NASA Conference Publication 2296, 1984.  Proceedings of workshop held at NASA
   Langley Research Center, October 25-27, 1983.
   
   [4] Dale A. Mackall.  Development and flight test experiences with a
   flight-crucial digital control system.  NASA Technical Paper 2857, NASA Ames
   Research Center, Dryden Flight Research Facility, Edwards, CA, 1988.
   
   [5] Dale A. Mackall and James G. Allen.  A knowledge-based system
   design/information tool for aircraft flight control systems.  In AIAA Computers
   in Aerospace Conference VII, pages 110-125, Monterey, CA, October 1989.
   Collection of Technical Papers, Part 1.
   
   [6] John Rushby.  Formal specification and verification of a fault-masking and
   transient-recovery model for digital flight-control systems.  Technical Report
   SRI-CSL-91-3, Computer Science Laboratory, SRI International, Menlo Park, CA,
   January 1991.  Also forthcoming NASA Contractor Report.
   
   
---------------------------------------------

Lottery bar codes no risk, spokesman says

Martin Minow 01-Jun-1991 1222 < minow@ranger.enet.dec.com >
Sat, 1 Jun 91 09:33:46 PDT
   
   From the Boston Globe, Friday May 31, 1991 "Ask the Globe":
   
     Q. The state lottery's instant tickets now have bar codes with which
     sales agents can verify both their validity and the winning amount.
     What is to prevent agents or lottery officials from "reading" the
     codes on unsold tickets and taking the big prizes for themselves?
   
     A. Lottery spokesman David Ellis tells us that, once an instant
     ticket is "read" by a bar-code reader, it is invalidated. If a previously
     read bar code is reread, the computer "flags" the ticket as "previously
     cashed" or "cashed-stolen." The bar-code reader, Ellis adds, is "the first
     step in a long and complex series of protections" built into the game
     by Richard Finocchio, the lottery's computer manager. "The security system,"
     he concludes with pride, "is the envy of lotteries around the world."
   
   In sending this to Risks, I am *not* suggesting that confidence in their
   security system is misplaced: I can think of several ways to implement
   this that, given acceptable security at the Lottery central computer, would
   prevent cheating.  I can also think of several ways that would appear
   safe, but aren't.  However, both the question and answer raise interesting
   questions of trust (and, as dear Pres. Reagan would say, verification).
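
   For concreteness, here is one such implementation that would prevent the
   abuse the questioner describes -- a sketch of my own, under the assumption
   of a secure central computer, and not a description of the lottery's actual
   system:

      # The bar code is only an index into central state; prize amounts
      # live only at the central computer, and redemption is atomic.
      class LotteryCentral:
          def __init__(self, prizes):
              # ticket id -> [prize, state]
              self.tickets = {t: [p, "unsold"] for t, p in prizes.items()}

          def activate(self, tid):          # called when a ticket is sold
              self.tickets[tid][1] = "sold"

          def redeem(self, tid):
              prize, state = self.tickets[tid]
              if state != "sold":
                  return f"flagged: {state}"
              self.tickets[tid][1] = "cashed"
              return f"pay {prize}"

      central = LotteryCentral({"T001": 100})
      print(central.redeem("T001"))   # flagged: unsold -- scanning an
                                      # unsold ticket pays nothing
      central.activate("T001")
      print(central.redeem("T001"))   # pay 100
      print(central.redeem("T001"))   # flagged: cashed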
   
   Full disclosure: I play the lottery twice a year -- when they send me
   their advertising freebie.  In four years, I've won exactly one dollar.
   
   Martin Minow		minow@ranger.enet.dec.com
   
   
---------------------------------------------

Re: Viper (Randell, RISKS-11.73)

Pete Mellor < pm@cs.city.ac.uk >
Fri, 31 May 91 17:14:40 PDT
   
   > The only civilian customer for the technology has been the Australian National
   > Railway Commission, which at the end of 1988 adopted Viper as the core of a
   > new signalling system. The Australians have now gone so far with developing 
   > the system that they would have difficulty switching to another technology.
   
   When I last heard of them (see RISKS DIGEST 6.48, 7.18, 7.36, 8.41, 9.1), the
   Australian National Railway Commission was suing its commercial supplier
   (Charter Technologies??) for precisely the same reason that Charter is now
   suing MoD.
   
   Peter Mellor, Centre for Software Reliability, City University, Northampton
   Sq., London EC1V 0HB  +44 (0)71-253-4399 Ext. 4162/3/1  p.mellor@uk.ac.city (JANET)
   
   
---------------------------------------------

Re: The FBI and computer networks (Bellovin, RISKS-11.77)

< TK0JUT1@MVS.CSO.NIU.EDU >
Fri, 31 May 91 14:36 CDT
   
   smb@ulysses.att.com writes:
   
   > I spoke a bit loosely when I used the phrase ``probable cause''; that's a
   > technical term, and does not (quite) apply.  Nevertheless, I stand by the
   > substance of what I said.  The FBI is not allowed to engage in systematic
   > monitoring of speech without reasonable suspicion of criminal activity.
   
   We can cavil over nuances of specific terms, but smb is essentially correct.
   The FBI (and other gov't agencies) are generally, either by statute or policy,
   prohibited from routine systematic monitoring of speech outside of specific
   investigatory needs.  This includes routine monitoring by informants or
   undercover operatives. Although the practice may be otherwise (see "The FBI's
   Misguided Probe of CISPES" by Gary M. Stern of the Center for National
   Security Studies), current policies are still (on paper, at least) guided by
   the "Attorney General's Guidelines on FBI Undercover Operations" (Jan. 5,
   1981), which restrict the use of undercover investigation, the most
   Constitutionally dangerous form of monitoring.
   
   There is surely nothing wrong with law enforcement agents, whether in a private
   or official capacity, participating in cybercommunication. The line is crossed
   when participation moves to surveillance, data acquisition, and record keeping
   as occurs in political surveillance.  The Secret Service may have overstepped
   this line when it set up its "sting board" (Dark Side) in 1988 in a way that
   seems, uh, "empirically disfactual" with the SS Director's account of it.
   
   Jim Thomas
   
   
---------------------------------------------

Re: Voting by phone

Bob Rehak < A20RFR1@MVS.CSO.NIU.EDU >
Fri, 31 May 91 14:53 CDT
   
   I don't see how one can keep one's anonymity in this vote-by-phone scheme.
   Passing laws against release of voter info, or entrusting the security of the
   voter info to people, makes me very nervous.  Having people in the loop causes
   a security problem by definition.  Would the people in charge of the 'voter'
   data center be willing to give up their right to privacy so we can monitor
   their activities, and so they can be stopped immediately if they try to
   divulge any voter information?  Will the system be TEMPEST secured so no
   electronic eavesdropping is possible?  Will the 'voter' data center be guarded
   by elite commandos sworn to defend it to the death?
   
   As I see it, if the data is stored somewhere, sooner or later the good guys or
   the bad guys will get hold of it one way or another, legally or illegally.
   Once the genie is out of the proverbial bottle, it will be too late to put
   him/her back.
   
   Imperfect as it is, I like the way the system is now.  If people were really
   serious about voting, they would take the time to vote.  Having a vote by phone
   system will only allow for more abuse and fraud as some earlier respondents
   have suggested.
                           Bob Rehak, DBA At Large, BITNET: A20RFR1@NIU
   
   

Re: Voting-by-phone (RISKS-11.75)

Larry Campbell < campbell@redsox.bsw.com >
Thu, 30 May 91 23:42:33 EDT
   
   This (voting by phone) is a classic example of an attempt to solve a social
   problem by the misguided application of technology.
   
   What, really, is the problem for which this is supposed to be solution?
   
   Invalids?  Travellers?  We have absentee ballots that work quite well, thank
   you.
   
   Low turnout?  Just make election day a national holiday, as it is in most
   civilized countries (but not the US).
   
   Electronic voting?  Who needs it?
   
   Larry Campbell             The Boston Software Works, Inc., 120 Fulton Street
   campbell@redsox.bsw.com    Boston, Massachusetts 02109 (USA)
   
   

voting by phone

< AURKEN@VAXC.STEVENS-TECH.EDU >
Sat, 1 Jun 1991 06:59 EST
   
   There is a tendency to assume that computer-based voting systems such as voting
   by phone are more risky.  I say "assume" because all election systems are risky
   and depend on certain procedures being carried out with integrity. But anyone
   familiar with the manipulation of paper or mechanical voting systems would want
   clarification of the statement that voting by phone relies "excessively" on the
   honesty of the operators of the system. In fact, allowing the individual voter
   to check his/her vote avoids passive reliance on the integrity of election
   administrators.
   
   In this connection, verification of one's vote is possible under paper voting
   systems and others, but computer-based systems make it easier for the citizen
   to initiate and carry out verification.  I say this on the basis of 5 years of
   development work with experimental voting systems in a computer-networking
   environment.
   
   Grave-yard voting is a problem, but it is not clear that the problem would be
   worse with computer-based (phone) voting.  Depending on the method of
   verification used, such manipulation might be less frequent and more
   difficult to employ.
   
   Re time-stamping: the latest schemes of time-stamping and cryptographic coding
   based on distributed (not centralized) processing do make it practically
   impossible to break the system.  Put another way, the amount of information
   that needs to be collected to break the system is so great that watch-guard
   programs would detect would-be criminals.
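
   A minimal sketch of the kind of scheme Urken alludes to, in the spirit of
   Haber and Stornetta's linked time-stamps (an illustration only, not any
   deployed election system; the hash function is a stand-in):

      # Each vote record is hashed together with the previous digest, so
      # altering or deleting a record breaks every later link in the chain.
      import hashlib

      def link(prev_digest, record):
          return hashlib.sha256((prev_digest + record).encode()).hexdigest()

      chain = ["genesis"]
      for record in ["voter-A:yes", "voter-B:no", "voter-C:yes"]:
          chain.append(link(chain[-1], record))

      print(chain[-1][:16])   # final digest: publish it widely
      # Any tampering with "voter-B:no" changes all later digests, so a
      # single honest copy of the published digest exposes the alteration.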
   
   Finally, using "none-of-the-above" is an improvement on plurality voting, which
   truncates information about voter preference orderings and is therefore
   undesirable from a theoretical point of view.  This may not seem directly
   related to voting by phone, but Roy mentioned it as a possibility and it seems
   reasonable to extend the idea to consider other methods of voting that do not
   distort the voting process.  Moreover, these issues are related in the sense
   that voting methods are not trivial to use.  I have designed interfaces for
   computer-based voting with different methods of voting and found what works and
   what doesn't work.  Interfaces for voting by phone must be designed based on
   extensive experimentation and we may find that it is infeasible to use voting
   methods that have desirable social consequences.
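
   A worked example of the truncation point, with hypothetical ballots:
   plurality keeps only first choices and elects A, while a method that uses
   the full rankings (a Borda count here, purely as an illustration) elects B,
   whom a majority prefers to A head-to-head.

      from collections import Counter

      ballots = ([["A", "B", "C"]] * 40 +    # 40 voters: A > B > C
                 [["C", "B", "A"]] * 35 +    # 35 voters: C > B > A
                 [["B", "C", "A"]] * 25)     # 25 voters: B > C > A

      plurality = Counter(b[0] for b in ballots)
      borda = Counter()
      for b in ballots:
          for rank, cand in enumerate(b):    # 2 points for 1st, 1 for 2nd
              borda[cand] += len(b) - 1 - rank

      print(plurality.most_common())  # A wins on first choices alone
      print(borda.most_common())      # B wins once full rankings count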
   
   Arnie Urken
   
   
---------------------------------------------

Re: The Death of Privacy? (Robert Allen, RISKS-11.71)

Brett Cloud < hexmoon@buhub.bradley.edu >
Sat, 1 Jun 91 15:37:15 GMT
   
     I agree with the spirit of this posting.  Since we cannot have complete
   privacy, let us at least have complete access for everybody.  I can readily
   see the day coming when liberty will include, as part of its definition, the
   freedom to access information.

     People who abuse the system could have their access taken away, much as a
   robber is sent to prison.  Also, a person on parole could be given varying
   degrees of access as their behavior warranted.

     PCs are becoming so cheap and powerful that I cannot see an effective
   monopoly being maintained for long by (m)any central power(s).  The people
   will take back their own power.

     While, granted, I'd prefer to have more privacy, I think it would be better
   to KNOW that information about me is available as a matter of course, rather
   than THINK the information is private--when a privileged subset actually has
   access.

     If we are going to have as much known about us as appears to be the case,
   then the only way to level the playing field is by letting everyone have, as
   a matter of course, the SAME access, unless and until they abuse the trust
   implied by the system.

     If people know that their actions can be/are looked at, maybe they'll think
   twice about what they say or do.  Granted, this is a poor substitute for a
   proper set of values or a good conscience, but I think few would dispute that
   many people in power could use a constant reminder that they are answerable
   to the people.
                                                Brett
   
   
---------------------------------------------

Re: Credit reporting (Paul Schmidt, RISKS-11.77)

"David A. Curry" < davy@erg.sri.com >
Fri, 31 May 91 14:52:44 -0700
   
   If you belong to TRW's Credentials Service, this is one of the "features":
   any time anyone accesses your credit report, they send you a nice little
   blurb the next day such as (from when I applied for my Gold Card):
   
     ----------------
     [... assorted introductory stuff ...]
   
     The following companies have requested and received copies of your TRW
     credit files.
   
       Date   Company Name and Address           TRW Number  Type of File
     -------- --------------------------------   ----------  ------------
     06-07-90 AMERICAN EXPRESS                   3459491     Credit
              4315 South 2700 West
              Salt Lake City, UT
   
     [... stuff about where to call with questions ...]
     ----------------
   
   TRW Credentials costs $35/year.  For this, you get credit card protection,
   unlimited copies of your credit report, and the little notes above.  They also
   let you enter in all your financial data, so that it can be retrieved
   electronically with your "access number" (like a PIN) by banks and such when
   you apply for loans - so you don't have to fill out the forms (in theory
   anyway, I never tried it).
   
   Before anyone re-starts the discussion about the rip-off/non-rip-off nature of
   this service, it should be pointed out that:
   
   - Any time you are denied credit, you are entitled by law to a *free* copy
     of your credit report from the credit bureau(s) which supplied the
     information to the lender.  You can then dispute any incorrect info.
   
   - You can get a copy of your credit report at any time from any credit
     bureau for (usually) $8 and some assorted pieces of I.D.
   
   - You only get the little blurbs when someone accesses your TRW file.  TRW
     is not the only credit bureau in the country, although it's certainly one
     of the largest.  I believe Equifax and EDS offer similar services, though.
   
   So, unless you want the credit card protection and the little blurbs when
   your file gets looked at, the service is a rip-off.  Me, I like the little
   blurbs, find the service fairly well-run, check my credit report once or
   twice a year, and don't mind paying the $35.
   
   --Dave Curry
   
   

Credit Reporting

< WHMurray@DOCKMASTER.NCSC.MIL >
Sat, 1 Jun 91 09:27 EDT
   
       [similar comment...]
   
   Isn't the market great?  On the other hand, require them to do by law what they
   are already prepared to do for a fee?  Guaranteed to cost you more.
   
   In an information society it is possible for the credit-worthy to obtain in
   minutes credit that was, in an earlier time, available in days or weeks and
   only to the most well established.  In part because of this, we all enjoy
   higher levels of commerce and a standard of living that would not have come
   about without it.  When we look at the effects of all of this on privacy, we
   should not forget the reason that we put the system in place.  Yes, WE; the
   collective we.  We did it; it was not done to us.  In remedying the
   difficulties, we must be careful not to overlook or destroy the benefits.
   
   
---------------------------------------------
