University of Bielefeld - Faculty of Technology
Networks and Distributed Systems
Research Group of Prof. Peter B. Ladkin, Ph.D.
This page was copied from: http://catless.ncl.ac.uk/Risks/15.13.html
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
The 9 o'clock evening news on BBC1 TV in the UK on Wednesday 13th October featured a new report from the UK HSE (Health and Safety Executive) which includes concerns about the software in the new Sizewell B nuclear plant. Darlington in Canada was also mentioned, and Prof. David Parnas made a brief appearance voicing his concerns about the issues in this area. The report says the "software is of a high quality" but expresses doubts that the claimed safety levels can be achieved. Does any RISKS reader have a full reference for the report? Jonathan Bowen, Oxford University
A thread in sci.med.pharmacy was discussing warning labels on medication. It seems that the computer systems most pharmacies now use print the warning labels that the system thinks should be placed on each prescription. These labels are warnings such as "May cause drowsiness". The risk is obvious. But a new California law may make matters worse. The high cost and low availability of pharmacists has led to the creation of Pharmacy Technicians. These positions require, at most, a junior-college certificate. These technicians may fill prescriptions under a pharmacist's supervision, but may not dispense medication. Pharmacists I know will override the computer and apply the stickers they know to be correct. The techs will probably not know which are correct, and the quality of the supervision will not replace the judgment of a trained pharmacist. Also, how many hackers could pass up the opportunity to spread around a few erroneous "For rectal use only" stickers on certain managers' prescriptions :-) Bob Campbell campbelr@cup.hp.com
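A minimal sketch of the mechanism described above, assuming a hypothetical lookup table and labels_for() helper rather than any real pharmacy system: the warning text comes from a drug-to-warning table, so a single wrong (or tampered) entry is printed faithfully on every prescription for that drug, and a technician without pharmacology training has no independent basis for overriding it.

# Illustrative only: hypothetical data and function names, not a real system.
WARNING_LABELS = {
    "diphenhydramine": ["May cause drowsiness."],
    "tetracycline":    ["Avoid prolonged exposure to sunlight."],
    # One bad entry -- a typo, a stale update, or tampering -- silently
    # mislabels every prescription filled for that drug.
    "amoxicillin":     ["For rectal use only."],   # wrong, but printed anyway
}

def labels_for(drug_name):
    """Return the auxiliary warning labels the system will print."""
    return WARNING_LABELS.get(drug_name.lower(), [])

if __name__ == "__main__":
    for drug in ("Diphenhydramine", "Amoxicillin"):
        print(drug, "->", labels_for(drug))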
It takes a tough hide to be a reporter. My note in RISKS-15.06 on the RISKs of trusting e-mail generated a modest flurry of responses pointing out some errors and asking for some clarifications. Since all who sent me notes could just as well have sent them directly to RISKS, I am assuming that even though they want parts of the record set straight they don't want to do so publicly. Although I know my "sources" on the scene were convinced of the accuracy of what they told me, by the time the information passed into their hands it seems that some of it was slightly garbled, although not badly enough to weaken the essential point of the whole incident. (Not to detract from the seriousness of the situation, I do have to note that none of the e-mail pointing the following out was digitally signed or authenticated.)

1. The secretaries of the principal figures involved in the resignation message did not take the *contents* of the message seriously. However, they took its existence seriously, believing it indicated there had been a serious compromise of the security of their office information systems. The incident itself has "undermined the confidence" of the clients of the University's computer systems. (This is new information which I think makes the incident actually of more interest than the original version.)

2. The FBI was not called in, and the students (three, not five) were not expelled, but reprimanded and (temporarily, according to another source) denied their e-mail privileges. I suspect here my sources were telling me actions that were being contemplated but upon which a final decision had not yet been made.

3. It was not really fair to mention the name of the mail client the students used, since that is irrelevant and not the source of the problem: it is the SMTP protocol and the inherent insecurity of the Internet that give the opportunity. One doesn't even need to have an e-mail program to forge an e-mail message: telnet works just fine.

4. "PEM" stands for "Privacy Enhanced Mail." See Internet RFCs 1421, 1422, 1423 and 1424; implementations for a variety of platforms are available. (Temptation to insert commercial here resisted.) PEM provides digital signatures, authentication, and encryption.

5. "6,000" of course is not the size of the student population at the U of W, but some could have read my note that way. The number of students, all of whom are eligible for an e-mail account, is about 41,000. "6,000" (the number now is actually closer to 7,000) is the number who have signed up for it so far.

Ted Lee, Trusted Information Systems, Inc., PO Box 1718, Minnetonka, MN 55345 612-934-5424 tmplee@tis.com
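To make point 3 concrete: the SMTP dialogue carries no authentication, so a server accepts whatever envelope and headers the client supplies. The sketch below uses made-up hostnames and addresses and Python's standard smtplib purely for illustration; the comments show the equivalent exchange typed by hand over "telnet mailhost 25". PEM-style digital signatures (point 4) are what would let a recipient detect such a forgery.

# Illustrative sketch only; hostnames and addresses are invented.
import smtplib

forged_message = """From: president@example.edu
To: staff@example.edu
Subject: Resignation

I hereby resign, effective immediately.
"""

# Equivalent hand-typed telnet session to port 25:
#   HELO anywhere.example.com
#   MAIL FROM:<president@example.edu>
#   RCPT TO:<staff@example.edu>
#   DATA
#   ...headers and body as above, ending with a line containing only "."
with smtplib.SMTP("mailhost.example.edu") as smtp:
    smtp.sendmail("president@example.edu",
                  ["staff@example.edu"],
                  forged_message)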
For *years*, attmail.com has been running a brain-dead X400 mailer (are there any other kinds of X400 mailer?) that regularly causes a flood of bounce messages whenever anyone posts to a newsgroup that is gatewayed to a user on their system. The name of Brad Hicks (poor guy) must be known by just about every poster to comp.org.eff.talk, for instance. Well, this is merely annoying, but not _risky_... so where does the risk come in? Look at this (short) extract from the (long) bounce message:

  From: mc/S=E-Mail_Services_Administrator/OU=0205216@mhs.attmail.com
  Date: 25 Sep 93 01:19:19 GMT
  Proper name:
    First name : Brad
    Last name  : Hicks
  X.400 O/R name attributes:
    Country    : US
    ADMD       : ATTMAIL
    PRMD       : MASTERCARD
    Org unit 1 : 0205925
  Postal address:
    Street : nnnnn Lackland Road
    Town   : St. Louis, MO nnnnn
  Fax : 314-275-nnnn

Isn't this fascinating? We now know Brad's employer, home address, home fax number, and some magic number that would no doubt be of use to someone up to no good if they knew what it meant :-) Anyone want to bet this info came from internal attmail.com information and not from something public like a finger file? Maybe Brad should have a word with attmail.com ... (or change to almost any other non-X400 system in the world that doesn't seem to have these repeated problems...) Graham

PS I'd have mailed this to Brad but I've *never* in my life managed to mail to anyone with an X400 address. But hey - this *is* the standard that the British Government tells all its organisations to adhere to. Jeez.

[Brad kept trying to get added to the RISKS list, but I could never get mail through to him reliably. The other annoying thing about ATTMAIL is that I had to create a macro with a single mailing for each recipient; it would not accept multiple addressees. I don't know if that is still the case. (This item certainly seems timely after Phil Agre's contribution in RISKS-15.12.) PGN]
Prompted by the message by Mr. Brunnstein in RISKS-15.11, I thought RISKS readers might find it interesting to know that a "Computer Crime" act is currently under review by the Italian Parliament (to the best of my knowledge, one of its two branches has approved it). I have enclosed a tentative translation as well as the original text of the article related to "malicious programs". The whole act also addresses other issues such as unauthorized entry or possession of access codes, etc.

A bit of personal comment about the wording of the article: while the Swiss text focuses on the concept of (lack of) "authorization" in order to define the illegal behaviour of both people and programs, there is no such "keyword" in the Italian proposal. Moreover, the provision against "programs ... having the effect of ... damaging a computer or ... the programs or data contained in ... it" is even more RISKy. It seems to me that, besides viruses, most of the bugs usually found in software could fall under this article, since lack of intent is not grounds for exclusion from punishment. Having followed the VIRUS-L forum for a while, I am perfectly aware that it is almost impossible to draw a satisfactory border between malicious programs and legitimate ones, but I feel that this text misses the point by more than a bit. Comments welcome. Luca Parisi.

--Proposed Translation--
--Disclaimer: Please note that I'm not a lawyer, so people in the legal field might find it inaccurate; feel free to correct it if needed--

Article 4 of the [proposed] computer crime act: [material deleted] "Article 615-quinquies of the Penal Code (Spreading of programs aimed at damaging or interrupting a computer system). Anyone who spreads, transmits or delivers a computer program, whether written by himself or by someone else, aimed at or having the effect of damaging a computer or telecommunication system, the programs or data contained in or pertaining to it, or interrupting in full or in part or disrupting its operation, is punished with imprisonment for a term of up to two years and a fine of up to It. L. 20,000,000."

--Original Text--
--Excerpt from: Camera dei Deputati - Disegno di Legge presentato dal Ministro di Grazia e Giustizia (Conso), recante "Modificazioni ed integrazioni alle norme del codice penale e del codice di procedura penale in materia di criminalita' informatica." - N. 2773--

Art. 4 [omissis] "Art. 615-quinquies. - (Diffusione di programmi diretti a danneggiare o interrompere un sistema informatico). - Chiunque diffonde, comunica o consegna un programma informatico da lui stesso o da altri redatto, avente per scopo o per effetto il danneggiamento di un sistema informatico o telematico, dei dati o dei programmi in esso contenuti o ad essi pertinenti, ovvero l'interruzione, totale o parziale, o l'alterazione del suo funzionamento, e' punito con la reclusione sino a due anni e con la multa sino a lire venti milioni."
Flight International, 13-19 October 1993, contains the following report (Guy Norris in LA):

A succession of autopilot anomalies in Boeing 757s and 767s has prompted calls from the US National Transportation Safety Board (NTSB) for corrective actions and revised operating procedures. In a 15 June incident at Frankfurt, Germany, a United Airlines 767-322-ER ran off the right side of the runway at 130kt (240 km/h) during a landing roll-out, when the rudder made an uncommanded movement 16deg-17deg to the right, with the nosewheel about to touch down. The crew managed to deflect the rudder and curve left, but missed another aircraft by less than 90m (300ft) as it [sic] returned to the runway. "Once on the runway, the pilot reported that he regained `soft normal' rudder pedals after pressing the autopilot disconnect button twice," says the NTSB. Boeing says that it is still mystified, but adds that tests have found "...no evidence at all to link the autopilot with the rudder anomaly". It adds: "We are taking it seriously and tests are continuing." All 757 and 767 operators will be notified of updated test results by 20 October. Boeing admits that it did discover a fault which had caused anomalous displays on the mode-control panel (MCP), but it sees no connection with the rudder event. The NTSB recommends that Boeing and autopilot supplier Rockwell-Collins address `...the uncommanded movements and errors seen in Boeing 757/767 MCP displays and switching functions" [punctuation sic]. It wants the US Federal Aviation Administration to issue an airworthiness directive implementing the changes and to check on the Boeing 747-400 and Fokker 100, which have similar autopilots. It also recommends that the FAA require Boeing to "...issue a temporary Airplane Flight Manual Supplement to ensure that pilots are aware that autopilots have engaged, disengaged and changed modes and MCP display window setting without pilot input". [End report]

It seems that the NTSB and Boeing disagree on whether there have been incidents of uncommanded autopilot control inputs. The Independent (newspaper) reported the NTSB comments on the front page on October 12th. Their report is less accurate (even misleading) than Flight's in a number of respects, but there are some more numbers (from The Independent, p1, Oct 12th 1993):

[....] On checking United's records, the US National Transportation Safety Board found that there had been 29 instances - all but one since 1985 - in which autopilots on 757s and 767s had behaved unpredictably [meaning what? `uncommanded control input'? pbl]. Boeing says that most of these incidents involved faulty readings during flight which were corrected by the crew. [....] Investigators are mystified by the faults since the autopilot is a triple system, including two back-ups to ensure there is no failure. [they've obviously been reading up on Byzantine disagreement - pbl] [End quotes]

Peter Ladkin
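A note on the Byzantine-disagreement aside: a triple-redundant channel with a two-of-three voter masks one channel that fails cleanly, but it cannot mask a fault that feeds all three channels the same bad data, nor a channel that reports different values to different observers. The sketch below is purely illustrative of that point and is not based on the actual 757/767 autopilot architecture, whose internals are not described in the report.

# Illustrative two-out-of-three majority voter over redundant channel outputs.
def vote(readings, tolerance=0.5):
    """Return a value agreed on by at least two channels, if any."""
    for a in readings:
        agreeing = [b for b in readings if abs(a - b) <= tolerance]
        if len(agreeing) >= 2:
            return sum(agreeing) / len(agreeing)
    raise RuntimeError("no two channels agree -- the voter cannot mask the fault")

# One channel failing on its own is masked:
print(vote([16.0, 16.2, -3.0]))    # about 16.1

# A common-mode error (all channels fed the same bad input) is voted through:
print(vote([-3.0, -3.1, -2.9]))    # about -3.0, confidently wrong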
Flight International, 13-19 October 1993, reports (no byline):

A delay between pilot selection and physical actuation of spoilers and reverse thrust has emerged as a suspected key factor in the runway overrun on 14 September of the Lufthansa Airbus A320 at Warsaw Airport, in Poland. [...] The pilots were also not kept informed by the Warsaw tower of surface-wind changes. After touching down 700m (2,300ft) from the [Runway 11] threshold [.....] the pilots selected spoilers and reverse thrust, but there was a delay of 9s before they deployed, according to the sources. Actuation of the spoilers may have been prevented by a safety system designed to prevent their deployment in flight: automatic deployment depends on the wheels spinning up at touchdown to a speed greater than 72kt (135 km/h), but aquaplaning may have prevented wheel spin-up. Reverse-thrust actuation may have been prevented by another safety system designed to stop the operation of reverse thrust in flight [pursuant to the Lauda Air accident conclusions]. An undercarriage microswitch isolates the reverse-thrust actuators until the aircraft's weight is on the wheels. If the spoilers did not deploy, hardly any of the aircraft's weight would have been on the wheels, a factor accentuated by the pilot's decision to add 20kt to the aircraft's indicated airspeed [they mean: to add 20kt to the indicated airspeed designated for an approach under normal conditions, and use this as the target airspeed for the approach] because of possible windshear on approach [this is a normal procedure, but I rely on other sources for the 20kt figure]. The extra speed would have provided additional lift, reducing still further the weight on the wheels. Another vital point is that the Warsaw tower controller does not have a surface-wind read-out, but relies on a voice update every 3min from the meteorological department [!!!!!]. As a result, the crew was given a reported surface wind of 160deg/10kt when it [sic] was a tailwind of 280deg/20kt at touchdown [they landed on Rwy 11, which has an orientation of between 105deg and 114deg]. The Polish accident-investigation bureau says that it expects this week to issue its first statement on the accident. [End quote]

This seems to me like a failure of requirements specification. Weight-on-wheels and wheels-spinning are both secondary criteria for landing, as demonstrated categorically by the anticipated report. May I suggest a primary criterion?: close proximity to the ground, in a landing configuration, with spoilers and reverse thrust selected by the pilot. Whether Airbus is able to maintain its reputed stance that the software is 100% correct may depend on whether they consider requirements specification to be part of `software'. Peter Ladkin
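To make the requirements point concrete, here is a rough sketch built only from the conditions named in the article (wheel spin-up above 72kt gating the spoilers, the weight-on-wheels microswitch gating reverse thrust) and from Ladkin's proposed primary criterion. It is not the actual A320 braking logic, and the 10ft proximity threshold is an invented illustrative value. The as-described interlocks withhold both deceleration devices in exactly the reported aquaplaning-with-extra-lift situation, while the proposed criterion would not.

# A rough sketch only; NOT the actual A320 logic, whose details are not
# given here.  Predicates are taken from the conditions named in the article.
from dataclasses import dataclass

@dataclass
class LandingState:
    wheel_speed_kt: float       # wheels may not spin up when aquaplaning
    weight_on_wheels: bool      # false while extra airspeed keeps lift high
    radio_altitude_ft: float    # proximity to the ground
    landing_config: bool        # gear down, flaps at a landing setting
    spoilers_selected: bool     # pilot has selected ground spoilers
    reverse_selected: bool      # pilot has selected reverse thrust

def interlocks_as_described(s):
    """Secondary criteria: wheel spin-up gates spoilers, weight-on-wheels
    gates reverse thrust.  Aquaplaning plus extra lift defeats both at once."""
    return {
        "spoilers": s.spoilers_selected and s.wheel_speed_kt > 72.0,
        "reverse":  s.reverse_selected and s.weight_on_wheels,
    }

def ladkin_primary_criterion(s):
    """Close proximity to the ground, in a landing configuration, with
    spoilers and reverse thrust selected by the pilot.
    The 10ft threshold is an illustrative value, not a real one."""
    return (s.radio_altitude_ft < 10.0 and s.landing_config
            and s.spoilers_selected and s.reverse_selected)

# The Warsaw situation as reported: on the runway, aquaplaning, little
# weight on the wheels because of the extra approach speed.
warsaw = LandingState(wheel_speed_kt=40.0, weight_on_wheels=False,
                      radio_altitude_ft=0.0, landing_config=True,
                      spoilers_selected=True, reverse_selected=True)

print(interlocks_as_described(warsaw))   # {'spoilers': False, 'reverse': False}
print(ladkin_primary_criterion(warsaw))  # True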
Flight International, 13-19 October, also reported that the B747-400 that landed in the drink in Tahiti *did not* yield "evidence of a fault in the full-authority digital engine-control systems on the General Electric CF6-80C2 [engines]." There's a lovely picture of the aluminum overcast slaking its thirst in the ocean. Peter Ladkin [Pun-itive measures are needed for pbl. Maybe someone has to pound "stirling" into the briny. Den-mark it well, Laertkin! Drinking in Tahiti in this manner is NOT a good idea. PGN]
Brian Kenney writes: The system would be triggered if automatic sensors - which Blair said may be subject to error - detected a disruption of key military communication links, as well as seismic disturbances and flashes caused by nuclear detonations inside Russia. I think it prudent at this point to remind everyone of the classic "RISKS" movie "Dr. Strangelove -- or How I Learned to Stop Worrying and Love the Bomb" (Stanley Kubrick directs, Peter Sellers plays 3 roles). Roughly, a crazed US commander sends his wing of B-52s to bomb Russia, and then it is revealed that the USSR has something like what Brian discusses above (the Doomsday Device, I believe). They try to recall the planes, but one has a damaged encryption unit, so recall orders cannot easily be given. A simple RISKY mistake (a system designed without proper safeguards) and annihilation is made imminent. It sounds rather like a scenario which may play itself out, should this device actually exist -- which, terrifyingly, it now may! This movie is a *must-see*, particularly in this RISKS context. [Old stuff, but this item is included for our younger readers. PGN]
In RISKS 15.11, Larry Detweiler wrote the following about import of crypto: "No defense article may be imported into the United States unless (a) it was previously exported temporarily under a license issued by the Office of Munitions Control; or (b) it constitutes a temporary import/in-transit shipment licensed under Section 123.3; or (c) its import is authorized by the Department of the Treasury (see 27 CFR parts 47, 178, and 179)." According to the ITAR, "Permanent imports of defense articles into the United States are regulated by the Department of the Treasury." My understanding is that Category XIII of the munitions list, which includes encryption technology, has been removed from the munitions import list. Thus, as near as I know, there are no controls on permanent imports of encryption technology. Dorothy Denning
I highly recommend a paper by Victoria Bellotti and Abigail Sellen of Xerox EuroPARC entitled "Design for privacy in ubiquitous computing environments". Bellotti and Sellen demonstrate a structured method for articulating a wide range of privacy problems that can arise with a potentially invasive computer-based technology. The method is not nearly complete, since it does not really address the larger institutional context of such systems, but it is valuable nonetheless. It has been published as part of the ECSCW '93 Proceedings: Proceedings of the Third European Conference on Computer-Supported Cooperative Work - ECSCW'93, G. De Michelis, C. Simone & K. Schmidt (eds.), Kluwer Academic Publishers, Dordrecht, The Netherlands, pp. 77-92. It is also available (or at least used to be) as Xerox Cambridge EuroPARC Technical Report EPC-93-103, 1993. Phil Agre, UCSD
The October 9 issue of the weekly magazine The Economist contains an extended article of about 20 pages on the topic of "New Frontiers of Finance: The Mathematics of Markets", or in other words, about what computers can and can't do in terms of predicting the behavior of stocks and currencies. While I'm not an expert in that field, the article appeared to me to be very well reasoned and well balanced, and I certainly recommend it. If you read this too late to buy the issue at a newsstand, you can also mail-order this article alone from The Economist Newspaper Group, Inc., Reprints Dept., 111 W. 57th St., New York NY 10019. The price this way is $3.50 US (plus tax in CA/DC/IL/MA/NJ/NY/VA/Canada), the same as the cover price of the issue. Mark Brader, Toronto utzoo!sq!msb, msb@sq.com
From the October 1993 issue of the Bulletin of the American Academy of Arts and Sciences: The University of Michigan Press has released _Risk_ (paperback, $14.95) edited by Edward J. Burger, Jr., of the Georgetown University Medical School and Institute for Health Policy Analysis. This volume of essays (an expanded version of the Fall 1990 issue of _Daedalus_) grew out of an exploration of how Americans think about risk -- especially risk to health -- and how their views shape their personal and political behavior. It touches on such topics as theories of risk perception, the many ways in which public views about risk have colored government activity, the role of the civil justice system in regulating public risk and compensating for its consequences, management of risk in sexually transmitted diseases, the quality of media reports on health risks, and the influences of science and scientists on litigation and public policy. To order _Risk_, call (313) 764-4394.
US citizens who attend this may discover a hidden risk: the US State Dept still has a ban on US citizens entering Cuba without prior approval of the State Dept, which is usually granted only to State Dept employees, Cuban expatriates visiting relatives, and "approved" researchers. Violation of the ban could result in federal prison. Amazingly, the ban applies solely to this small Caribbean island that is a favored vacation spot for Canadians and Europeans. No such ban exists for countries we actually have military interest in, such as Somalia, Iraq, and Bosnia. andrew mossberg * network manager * symbiosis corporation * miami florida usa (305) 597-4110 * fax: 597-4002 * editor, south florida environmental reader
> ... That would lead to
> OVERDOSES being applied at different sites to different patients.

This is not true. Radiation therapy is violent because it has to be. Doctors know that in order to (have the best chance of) curing a patient, they have to do a certain amount of violence: too much violence and the chance that the treatment will kill the patient goes up, but too little violence and the chance that the cancer will kill the patient goes up. Doctors do not do wilful damage to their patients. Too low a dose is just as bad as too high a dose; dying of cancer is just as unpleasant as dying of radiation burns. Sean
I remember a glowing _Discover_ magazine article describing how perfect the Hubble mirror was.... The Hubble mirror was tested, but as I recall it was *management* which balked at the necessity of building a second test jig because of a few anomalous measurements. (And don't forget they didn't have access to the DoD test equipment!) Since everyone seems to believe that testing the output of this device is so simple, do you mind telling us how you're going to model the *human* in the calibration? Can you use a side of beef, or would that introduce too many errors? Would you need to surgically implant sensors into cadavers? Would the fact that they (obviously) don't have circulating blood, aren't breathing, etc., make a difference in your measurements -- a living patient is always moving around, to some extent. I'm not arguing that testing shouldn't be done. I'm simply pointing out that it involves quite a bit more effort than sticking a radiation meter on the platform and turning on the power. Bear Giles bear@cs.colorado.edu/fsl.noaa.gov
The new Oxford University Press journal "High Integrity Systems" will explore issues related to systems that either require high integrity or exhibit it, or both.

o Systems may require high integrity because failure leads to critical losses. Typically these are systems that affect human safety, environmental stability, finance, or some aspect of the societal infrastructure.

o Systems may exhibit high integrity because they have evolved mechanisms to survive the kinds of shocks their environments may offer.

Although redundancy, fault tolerance, and reliability are important properties of many high integrity systems, the journal is not solely about these features. Its focus is broader. Its aim is to provide an interdisciplinary platform for the examination of mechanisms that allow systems to accomplish their objectives in the face of both anticipated and unanticipated obstacles. Strategies used by both computer-based and naturally occurring systems are of interest. Papers focusing on the design, analysis, and explication of such mechanisms and strategies are solicited. Analyses of the social and legal consequences of losses of system integrity are also of interest. Original research, case studies, and tutorial and survey articles are all welcome.

The journal is supported by an international editorial board. All papers are fully refereed. Electronic submission is encouraged. Publication of accepted papers within six months of submission is anticipated.

For author information contact either:
Editor: A. D. McGettrick, University of Strathclyde, Glasgow G1 1XH, UK; adm@cs.strath.ac.uk
Associate Editor: R. J. Abbott, California State University, Los Angeles, CA 90032, USA; rabbott@calstatela.edu, OR The Aerospace Corporation, PO Box 92957, Los Angeles, CA 90009, USA; abbott@aero.org

Announcements and news items are welcomed by: P. A. Bennett, Centre for Software Engineering Ltd., Scunthorpe, UK