University of Bielefeld - Faculty of Technology
Networks and Distributed Systems
Research Group of Prof. Peter B. Ladkin, Ph.D.
This page was copied from: http://catless.ncl.ac.uk/Risks/18.08.html
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
In addition to the security bug found by Drew Dean, Ed Felten and Dan Wallach in March, there is another way to run native code from a Java applet, which will require a separate fix to the current versions of Netscape (2.01 and Atlas PR2) and Sun's Java Development Kit (1.01). Both this attack and the previous one rely on an applet being able to create an instance of the same security-sensitive class, but each does so using an independent hole in the bytecode verifier. Once an applet is able to run native code, it can read, write, and execute any local file, with the permissions of the browser. These attacks do not require any additional preconditions, other than viewing the attacker's web page with Java enabled. They can be done without the user's knowledge.

Summary of Java bugs found so far
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date       Found by   Fixed in    Effects
---------  ---------  ----------  -------
Oct 30 95  DFW        not fixed   Various - see
                      in HotJava  ftp://ftp.cs.princeton.edu/reports/1995/501.ps.Z
Feb 18 96  DFW/SG     1.01/2.01   Applets can exploit DNS spoofing to connect
                                  to machines behind firewalls
                                  Buffer overflow bug in javap
Mar  2 96  DH         1.01/2.01   win32/MacOS: Applets can run native code
                                  UNIX: Ditto, provided certain files can be
                                  created on the client
Mar 22 96  DFW        not fixed   Applets can run native code
Mar 22 96  EW         not fixed   If host names are unregistered, applets
                                  may be able to connect to them
Apr 27 96  DH         not fixed   Applets can run native code

There was also a separate bug in beta versions of Netscape 2.0 which, in hindsight, would have allowed applets to run native code.

[DFW = Drew Dean, Ed Felten, Dan Wallach   http://www.cs.princeton.edu/sip/News.html
 SG  = Steve Gibbons                       http://www.aztech.net/~steve/java/
 DH  = David Hopwood                       http://ferret.lmh.ox.ac.uk/~david/java/
 EW  = Eric Williams                       http://www.sky.net/~williams/java/javasec.html
 Dates indicate when the problem was first posted to RISKS, except for Eric Williams' bug, which has not been posted.]

For bugs in Javascript, see John LoVerso's page http://www.osf.org/~loverso/javascript/
These include the ability to list any local directory (apparently fixed in Atlas PR2), and a new version of the real-time history tracker.

Additional information on the March 2nd absolute pathname bug is now available from http://ferret.lmh.ox.ac.uk/~david/java/

Recommended actions
~~~~~~~~~~~~~~~~~~~
Netscape (2.0beta*, 2.0, 2.01):
  Disable Java (on all platforms except Windows 3.1x), and if possible Javascript, using the Security Preferences dialogue in the Options menu. Note that the section on security in the Netscape release notes is not up-to-date.

Netscape (Atlas PR1, PR2):
  As above, except that the options to disable Java and Javascript have moved to the Languages tab in the Network Preferences dialogue.

Appletviewer (JDK beta*, 1.0, 1.01):
  Do not use appletviewer to load applets from untrusted hosts.

HotJava (alpha*):
  Sun no longer supports HotJava alpha, and does not intend to fix any of its security holes until a beta version is released.

David Hopwood
david.hopwood@lmh.ox.ac.uk
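To make the failure mode concrete: what the verifier holes ultimately defeat is the SecurityManager check that stands between applet code and the local file system. The following minimal Java sketch is illustrative only - the class name and the file path /etc/passwd are hypothetical examples, not taken from Hopwood's report. It shows the check that normally stops untrusted applet code from reading a local file; an applet that has broken out into native code simply never reaches this check.

  import java.io.FileInputStream;
  import java.io.IOException;

  public class FileAccessCheck {
      public static void main(String[] args) {
          try {
              // In a browser, a SecurityManager is installed for applets;
              // run standalone, getSecurityManager() usually returns null.
              SecurityManager sm = System.getSecurityManager();
              if (sm != null) {
                  sm.checkRead("/etc/passwd");   // throws SecurityException for untrusted code
              }
              FileInputStream in = new FileInputStream("/etc/passwd");
              System.out.println("read allowed");
              in.close();
          } catch (SecurityException e) {
              System.out.println("blocked by the SecurityManager: " + e.getMessage());
          } catch (IOException e) {
              System.out.println("I/O error: " + e.getMessage());
          }
      }
  }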
There are many articles in RISKS wondering about various aspects of the safety of increasing automation in aircraft. We should remember that increased automation can also help to avoid accidents. Nancy Leveson (RISKS-17.21) pointed out some incidents in which TCAS (the Traffic Alert and Collision Avoidance System) seems to have helped avoid collision accidents. In RISKS-17.89, I referred to a report that the US Navy was speeding up acquisition of digital flight control systems for F-14s to help avoid loss-of-control accidents.

There is another example in recent news. From newspaper reports, it seems as if the safety of flight IFOR 21, a US Air Force T-43A (a type of B737-200) which crashed on approach to Dubrovnik, Croatia, could have been enhanced by more modern navigation equipment. This flight carried US Secretary of Commerce Ron Brown. The USAF does not normally release the results of its investigations to the public, so I have summarised facts from a short *Washington Post* article, two articles from *Flight International*, and a *New York Times Service* feature article that appeared in the *International Herald Tribune* on Monday 29 Apr 1996. I also include some of my own observations and (a link to) the approach plate (map) for the Dubrovnik NDB Runway 12 approach that the aircraft was in the course of executing.

This summary is available in the Compendium `Computer-Related Incidents and Accidents...' under my WWW home page at http://www.techfak.uni-bielefeld.de/~ladkin/

Peter Ladkin

  [Archivists reading RISKS ten years from now will hope that it is *still* there then. But Peter's home page is clearly a moving target anyway, changing at flying speeds. Grab it while it's hot. PGN]
Citing cost overruns and schedule delays because of mismanagement, the Federal Aviation Administration canceled a $475 million contract that was intended to help the airlines get extremely precise navigational data from satellites. The FAA announced the contract in August with Wilcox Electric Inc. of Kansas City, a subsidiary of Thomson-CSF SA of Paris. The FAA said the action was part of a new strategy of cutting losses early, rather than struggling along for years with mounting delays and cost overruns. _The Minneapolis Star Tribune_, Saturday, April 27, 1996, p. A8.
I'm writing this up as a reminder re: the importance of not jumping to conclusions, especially when computers are involved. My hope is that perhaps this story will one day be recalled by a reader facing a similar situation, and a better outcome will result.

Recently, one of our staff members found an incomplete outgoing message in her e-mail that she did not write. The appearance was that another staff member might have been forging an e-mail message from her machine, but had been interrupted. The creation date was several days before the message was found, making it appear that the creation date had been faked; there were two copies of the message in two locations, making it appear that the message had been copied; the writing style was not that of the machine's owner; and the issue mentioned had been a bone of contention between the staff members days earlier.

In my initial investigation, I reported that without full security and a reliable audit trail it wasn't really possible to prove that such a forgery had been done, but that this did look suspicious. Relationships within our group had been strained, and this was enough to start accusations flying. The climate in the office, already toxic, grew poisonous.

Yesterday, I put my finger on the actual smoking gun: I hadn't known it initially, but the two staffers were using Macintosh file sharing, and one was mounting the entire hard disk of the other as a server volume using the owner password, which gave complete read and write access to the entire volume. This was counter to our written guidelines on safe file sharing. The mail program in question, poorly written, stores full path names to keep track of the folder where it creates outgoing mail. One system's internal hard disk had been renamed from the default "Macintosh HD" and the other had not - I'm sure most readers know by now where this is going. The mail program found its outgoing-messages folder on the wrong machine, and the message in question was created there. I replicated this exact sequence of events on two other machines yesterday while my supervisor watched.

I haven't explained the duplicate copies on the remote machine or why the message was seen earlier, but this isn't necessary in order to convince me that this "forgery" was really an accident. Upon reflection, there was no clear motive, and everyone agrees now that if the attempt had been to defame a fellow employee, a much more coherent and effective job would have been done.

Public apologies have been made. We're switching e-mail clients, and our staff members will be expected to follow guidelines for safe file sharing. I personally was far, far too willing to blame a staff member before exhausting every technical possibility. Now we've got to work at rebuilding a trusting work environment and hope that this can be put behind us, but I don't think the accused staff member will quickly forget being the target of an unfair accusation where the only evidence existed on a computer, and my confidence in my own judgment has been badly shaken.

The RISK is of attributing to malice what is easily attributable to poorly written software, incomplete understanding of the system your co-workers use, and failure to follow good security practices. There is a further risk that a pre-existing climate of suspiciousness can push an investigation of an anomaly into a witch hunt.
The last RISK is that it is too easy for someone like myself, inexperienced at managing conflicts between staff members, to lose his objectivity and start becoming more suspicious of a person than of a computer. As a regular RISKS reader, I should have known better.

"Paul R. Potts" <potts@cancer.med.umich.edu>, Technical Lead, Health Media Research Lab, University of Michigan Comprehensive Cancer Center
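The path-name failure described above is easy to sketch. The fragment below is a minimal Java illustration - the folder names, the "/Volumes/Macintosh HD" path style, and the class are hypothetical, and this is not the actual Macintosh mail client, which resolved classic Mac OS volume names. A client that stores an absolute path and trusts it blindly will happily use a mounted file-sharing volume whose name matches, and so create its "outgoing mail" on someone else's disk.

  import java.io.File;

  public class OutgoingFolderLookup {
      // Absolute path the mail client recorded when it was first configured
      // (hypothetical example).
      static final String SAVED_PATH = "/Volumes/Macintosh HD/Mail/Outgoing";

      public static void main(String[] args) {
          File outgoing = new File(SAVED_PATH);
          // The saved path is trusted blindly.  If the local disk was renamed
          // and a file-sharing volume with the old name is now mounted, this
          // folder resolves on the *other* user's machine, and new outgoing
          // messages are silently created there.
          if (outgoing.isDirectory()) {
              System.out.println("Outgoing mail folder: " + outgoing.getAbsolutePath());
          } else {
              System.out.println("Saved folder not found; a safer client would re-prompt the user.");
          }
      }
  }

A more robust design would store the folder location relative to the user's own home or preferences area, or at least check that the resolved folder lives on the local boot volume before writing to it.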
I can't wait for AOL to try this with other languages. The "am" in "America" means the same in Turkish as what they're objecting to in Scunthorpe. Perhaps they need to transform themselves to "Omerica On-Line"! The first time I ever saw the offending word in print, it was spelled with an "o" anyway; this was in the early 60s in a script in _Plays and Players_, a British magazine devoted to contemporary drama. They presumably did that to get by the English drama censor (the Lord Chamberlain, whose idiosyncratic prohibitions provided the British intelligentsia of the time with nearly as much amusement as their modern American counterparts get from the CDA).
It seems somewhat ironic that AOL's substitution of an 'o' for a 'u' in the name of Scunthorpe has just translated the problem -- turning an embedded English vulgarity into a French one, the same one just to add to the fun (what my dictionary refers to delicately as: "Vulg. Sexe de la femme.") AOL, or any other organisation applying such a policy on a global basis, runs the risk of altering a name to avoid a perceived English vulgarism and creating as much, if not more, offense in the language and culture of origin. This is complicated by the fact that these second-order vulgarities would likely be unrelated to the English obsession with body parts and functions and thus be much harder for the censor to detect. Flavian Wallis
Fans of British humour will recall that "Norwich" encodes an indecent suggestion. Perhaps AOL should be censoring that fine city's name also. Greg [Lots of other comments were received as well, including the common gerundive name of a German town, a few classical typos that made things worse in miscorrection, and more. This thread is certainly not unraveling. PGN]
[Long message excerpted for RISKS ...]

The real message is:

1) Use PGP.

2) If you receive PGP e-mail and find it to be offensive, and only that, give up corresponding with the person in question.

3) (which is the crux of the matter) Does the e-mail contain a serious threat or form of intimidation? In that case it is covered by the normal strictures against intimidation. Does it contain slanders or libels? (N.B. English law doesn't consider a statement of fact to be a slander or libel, so I don't in all honesty think this law counters freedom of speech, even though its heavier users are Tory MPs.) And finally, is it in some other way criminal, e.g., a chain letter to deprive you of all your cash, which I admit I tend to dismiss as being the "buyer"'s problem to worry about? AOL aren't protecting you from ANY of these rather larger problems.

And the message to America Online is that they should probably provide something along the lines of PGP, however ropy and possible to crack, and stop fooling around with people's e-mail and free speech.

IBM must have had this problem, because under VM it was possible to monitor only your own id, not the id of someone you managed - a lesson I think Unix should probably have learned too (as a result a VM manager could claim not to be responsible for the nasty activities of the users). [...]

Phil Overy
The AOL censorship item in RISKS-18.07 reminds me of Paul Hilfinger's story about the time the Carnegie-Mellon University computer-center staff was ordered by the CMU administration to change the name of the "finger" command (despite it being an ARPAnet standard). They changed "finger" to "where" and also took it upon themselves to change Paul's name to "Paul Hilwhere" (initially intending it to be temporary). Paul actually approved of the change (as a kind of gentle protest), and it remained that way for some time. Jim Horning
> French forenames must be taken from saints and/or antiquity ...

Funny how French mores are constantly misrepresented in the Anglo-Saxon world - usually to present France as a kind of Soviet Union where everything is government-regulated. The informal guideline until a few years ago was that the name should either be from antiquity or appear in some calendar. This includes religious calendars (not only Catholic), but also e.g. the Revolutionary calendar, where months appear with names like Nivose and Ventose, and days have names like Cerise (cherry). I believe all the law said was something like: local authorities should exert proper care; the calendar approach was a pragmatic way to resolve borderline cases. As a last resort, of course, the courts would decide.

In the past twenty years or so pragmatism has reigned, what with the large immigrant populations (if you go to a playground you'll hear parents calling "Leila" and "Tarik" all around - not typical Catholic saint names), the influence of American movies (every other child seems to be called Jessica or Gregory), and a general move towards more laxity in tolerating individual tastes. This clearly has some limits, and I don't think that `Brfxxx...' would pass any more there than in Sweden.

-- Bertrand Meyer, ISE Inc., Santa Barbara
This posting adheres to the SELF-DISCIPLINE guidelines for better USENET discussions. See http://www.eiffel.com/discipline.
>> of what good security practices would proscribe! Not only do they suggest
>>                                       ^

Even more interesting is the fact that I use a proportionally spaced font for reading news, editing, etc., and so the `^' character appeared under the word `practices' rather than `proscribe'. I stared at it for ages before realising what had happened.

The RISKS? Don't make assumptions about how your intended audience will view information. We constantly have problems with this - people using MS-Word as an e-mail format; non-MIME-compliant mail readers; compression/encoding schemes that are UNIX-unfriendly - the list is endless.

andy piper
9a. Don't believe that a person cannot have a batch job or background process running on their machine, which could send e-mail to another address (with or without "fudged headers") while the person is, for example, 5 miles offshore of Costa Rica catching sailfish.

9b. Don't believe that a person cannot have an automated answering service that sends a reply to "a piece of e-mail" stating something similar to the following: "Sorry I cannot send a detailed reply to your 'piece of e-mail', because I will be very busy in meetings until the end of today" - when the actual person is "playing hookey" or still chasing those sailfish in Costa Rica.

Mike Marler, Information Technology, Georgia Tech, Atlanta, Georgia 30332-0715 mike.marler@oit.gatech.edu
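Point 9a is hard to rule out because nothing in ordinary Internet mail ties a message to a person sitting at a keyboard. The sketch below is a minimal Java illustration - all host names and addresses are made-up examples, and it omits reading the server's replies and the strict CRLF handling a real mailer would need. Any scheduled or background program can hand an SMTP server whatever envelope and From: header it likes.

  import java.io.PrintWriter;
  import java.net.Socket;

  public class UnattendedMailer {
      public static void main(String[] args) throws Exception {
          // Hypothetical relay host; a batch or 'at' job could run this
          // while its owner is nowhere near a computer.
          Socket s = new Socket("mailhost.example.com", 25);
          PrintWriter out = new PrintWriter(s.getOutputStream(), true);
          out.println("HELO client.example.com");
          out.println("MAIL FROM:<boss@example.com>");   // any address the program chooses
          out.println("RCPT TO:<staff@example.com>");
          out.println("DATA");
          out.println("From: boss@example.com");          // the visible header is plain text
          out.println("Subject: Sent while I was away");
          out.println("");
          out.println("This message was produced by a scheduled background job.");
          out.println(".");
          out.println("QUIT");
          s.close();
      }
  }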
I think PGN forgets a very important one:

10. Never agree to handle the account of another person, or to have access to his/her computer.

The reason is obvious: should that person commit any crime, s/he can always argue that since you had access to his/her account, you could have forged or impersonated him/her. If that person is your boss and more powerful and reputed than you are, chances are s/he will be believed and you will be charged with the responsibility for the crime.

Note: I'm not implying this could in any way whatsoever have happened in the Oracle case. But with computers it is now so easy to deny the authenticity of any document and imply it is a forgery that shifting responsibility onto someone else's shoulders is trivial.

jr
Has the Assistant DA never heard of cellular modems? John C. Rivard http://www.mcs.net/~jcr/ jcr@mcs.com
Adelyn Lee having Larry Ellison's password is significant. If she had his password, she may or may not have used it to send the e-mail. But as a RISKS matter, the technological evidence is not conclusive, even beyond some of the ways PGN mentioned. For example, the cellphone records don't even have to be forged. Unless they checked all possible cellphone numbers in the area that he could have used, they can't rule out his having both his own cellphone and another line with a cellular modem, or having had the e-mail sent via an at job or other automatic scheduled mechanism. Also, was the clock set to the same time for their computer and the cellphone records? Conceivably, he also could have had someone else with access do it (but allegations of conspiracy, like talk of black helicopters, don't do much to establish the credibility of a particular line of argument).

Since the technical evidence is not conclusive, it comes down to a question of credibility and who has the bigger lawyers. I'm concerned about the precedent that may be established if judges rule on an issue while the parties have not explored all reasonable arguments about the technology. We'll see what happens with the perjury charge.

-S. Simona Nass simona@panix.com

  [Clock synch problems also noted by tye@metronet.com (Tye McQueen). PGN]
Ah, this warms my heart, to know that we're learning lessons here: a person is charged with perjury, because computer records have shown that the computer records can't be trusted....
Information Infrastructure Project, Harvard University
Commercial Internet Exchange Association (CIX)
Internet Society

COORDINATION AND ADMINISTRATION OF THE INTERNET
Workshop Announcement and Call for Papers

This is a first announcement and call for papers and proposals for a workshop to be held at the John F. Kennedy School of Government, Cambridge, MA, USA, on September 8-10, 1996. The workshop will address issues in the international coordination and management of Internet operations. We are seeking papers which address the economic, organizational, legal and technical issues in migrating to internationally sanctioned, industry-supported processes and institutions. What should a fully internationalized Internet look like, and how do we get there from here?

Topics to be explored in the workshop and resulting publication include:

 - policy and management issues concerning network addresses, domain names, routing policy, settlements, interconnect points, intercontinental connectivity, and quality of service standards
 - legal and institutional structures for supporting core Internet functions
 - institutions and policies needed to ensure the future scalability and extensibility of the Internet
 - technical and implementation issues presented by heterogeneous national information policies
 - the need for data in support of Internet planning, including issues of how data should be collected and maintained
 - coordination needed for the deployment of new technology
 - international crisis management for the Internet

Although the Internet is already substantially privatized, certain essential functions -- notably the domain name registry, network number assignment, and the routing arbiter -- are still funded by the U.S. Government. Unlike the local telephone exchange, these integrative services are managed by third parties, contributing to an open competitive environment which has helped enable rapid growth of the Internet. Rapid growth, commercialization, and internationalization are putting stress on current institutions and procedures -- which are neither self-sustaining nor officially recognized at the international level. The National Science Foundation plans to phase out support for core administrative services and for international connections, just as it has withdrawn support for production-level backbone services. Conflicts over tradenames and number assignments suggest that international legitimacy is needed for domain name and network number management.

Beyond support for essential functions, there are many practical and policy issues where some greater degree of coordination or institutional leadership may be desirable. For example, how can the implementation of new technology and protocols be expedited? What common definitions and guidelines should exist to describe network performance? Should the functions performed by current Internet institutions (such as the Internic, RIPE, APNIC, and the IANA) be brought into a more robust international infrastructure, and if so, how? To what extent are multilateral peering arrangements and settlements needed to encourage continued growth and competition in the Internet access industry?

The conference will engage scholars, practitioners and policy makers in examining and discussing these issues. It will bring together stakeholders, academics and individual leaders within and beyond the Internet community to help define the future institutional infrastructure of the Internet.
Workshop papers will be revised and edited following the workshop for publication by MIT Press as part of the Harvard Information Infrastructure Project series. Potential participants are encouraged to submit papers that can be developed and revised for publication (copyright assignment is not required). Please send an abstract by June 15, 1996, for review by the program committee.

Please direct papers, proposals, and requests for future mailings to:

  James Keller
  Information Infrastructure Project
  Kennedy School of Government, Harvard University
  79 JFK Street
  Cambridge, MA 02138
  617-496-4042; Fax: 617-495-5776
  jkeller@harvard.edu

The Harvard Information Infrastructure Project is a project in the Science, Technology and Public Policy Program at the John F. Kennedy School of Government, with associated activities at the Kennedy School's Center for Business and Government and the Institute for Information Technology Law and Policy at Harvard Law School. This event and publication are funded in part by a grant from the National Science Foundation, Division of Networking and Communications Research and Infrastructure.