This is an archived page and is no longer updated.
Abstracts of RVS Publications by Topic
Foundations of System Correctness and Failure
This research area involves a look at some of the philosophical and
logical issues behind system correctness - what it means, for example -
and system failure, which is somehow the `converse' of correctness.
Reasoning about failure tells us much about what it is for a system to
be correct, and aircraft accidents are amongst the most carefully
investigated incidents of system failure. Consequently, the writings here
are divided into Some Plane Prose and
Foundations. Both are treated in the
book, Success and Failure of Artifacts. A second book describes our
formal method for failure analysis, Why-Because Analysis (WBA).
-
Notes on the Foundations of System Safety and Risk
Peter B. Ladkin
- Abstract
This is a book-length manuscript of approximately 150pp. Chapter 1 sets
out the ontology of systems: what they are, and so on. Chapter 2 repeats
Nancy Leveson's definitions of safety, reliability and risk, these being
possibly the most thoroughly researched so far. Amongst these is an attempt
to define risk by identifying hazards and computing various probabilities
around hazards. Chapter 3 shows by means of a simple counterexample how
this strategy fails for five different definitions of hazard, including
Leveson's (this counterexample first appeared in Hazards, Risk and
Incoherence, RVS-Occ-98-01, but has been extended here and the
presentation cleaned up). Chapter 4 introduces some distinctions amongst
types of predicates, and Chapter 5 applies the terminology developed so
far to a game-of-golf example suggested by Peter Mellor. Chapter 6 introduces
the conceptual machinery of causality needed for Causal System Analysis
(CSA), presented in Chapter 7 by means of a very simple pressure tank
example. Why-Because Analysis (WBA) is presented very cursorily
in Chapter 8, by means of the Warsaw accident analysis, dealt with in
more detail in RVS-Bk-98-01 and RVS-Occ-97-09. Chapter 9 surveys some
important social aspects of judgements about technological risk.
This manuscript was originally written as a set of notes for a tutorial,
and it shows. Most of the work is original, with the notable exception of
the last chapter on social aspects (however, even here the presentation and
commentary is original).
Various additions and expansions are planned, notably
a chapter on parts taxonomies, a fuller treatment of WBA, and an expanded
survey of seminal work on social aspects of engineering risk.
[ PDF Version (1.65MB) |
PS Version (7.83MB)]
RVS-Bk-00-01, 30 June 2000.
-
Why-Because Analysis: Formal Reasoning About Incidents
Peter Ladkin, Karsten Loer
- Abstract
This book explains the Why-Because Analysis Method of incidents with
complex systems, WBA, working through an aviation example, with formal proof.
Informal applications of WBA to some well-known aviation accidents involving
computers are also given.
This is a draft manuscript and is for private viewing only. All rights
are reserved by the authors.
[Book]
-
The Success and Failure of Artifacts
Peter Ladkin
- Abstract
I study some of the philosophical and logical questions in the
foundations of engineering, such as:
what is the logical situation of specification, requirements,
design, software, and digital hardware, with respect to `the world' -
or even with respect to each other, as components of an artifact?
What can we prove mathematically, and what features stand no chance
of being mathematically proven? What does it mean for a system to
fail, and how do we tell? Conversely, what does it mean for a system
to be `correct'? How can we explain or reason about failure? What
are the features of the world that are relevant in considering these
questions? What ontology do we need; what reasoning power do we need;
what is the logical form of our assertions about systems? All these
are natural questions to a philosophical logician, which have not
in my view really begun to be answered. I offer views and arguments
for these views in this series of essays, which include many of
those appearing as individual reports below.
(HTML)
- The Pre-Implementation Safety Case for
RVSM in European Airspace is Flawed
Peter B. Ladkin
- Abstract
The argument in the Pre-Implementation Safety Case for RVSM [1]
demonstrates at most that RVSM operations without ACAS meet Target Levels of
Safety. It does not demonstrate that RVSM operations with ACAS-equipped
aircraft meet Target Levels of Safety; neither can a correct argument for this
assertion be reconstructed from the document. Since most aircraft operating in
RVSM are required to be ACAS-equipped, the safety case does not establish the
safety level of RVSM operations as they are currently conducted and for the
foreseeable future.
This short note provides a proof of this thesis, and includes some
commentary on selected citations from the safety case document.
[HTML (18K)]
RVS-Occ-02-03, 29 August 2002.
- ACAS and the South German Midair
Peter B. Ladkin
- Abstract
I describe the ACAS system and its operation. In so doing, I point
out its significant weaknesses in dealing with some three-aircraft
conflicts; indeed, it cannot resolve certain three-aircraft conflicts.
I argue that this makes it impossible to devise uniform procedures
for crew to react to ACAS Resolution Advisories. In particular, I
note that in the Southern German midair collision on 1 July, 2002,
a simple cognitive mistake put one participating crew into a
cognitive situation that they could not reasonably resolve; indeed,
in which it may have been rational to act contrary to international
recommendations. Whether or not this actually happened, the scenario
highlights a flaw in ACAS, in that one mistake by a non-participant
in the RA engenders an irresoluble cognitive model in one crew.
[HTML (52K)]
RVS-Occ-02-02, 12 August 2002.
- Review of the Cushing
Grammar
Martin Ellermann and Mirco Hilbert
RVS-Occ-01-02, 23 July 2001.
- Abstract: In his book Fatal Words
(U. Chicago Press, 1994), Steven Cushing proposed a formal grammar for a
significant fragment of official Air Traffic Control communication with
pilots, as specified in the U.S. FAA publications Air Traffic Control
Handbook and Aeronautical Information Manual. This grammar was partially
incomplete (it did not define the syntax for headings and bearings, or for
radio frequencies, for example) and partially incorrect (it allowed certain
phrases that were not in use, and disallowed certain others in use). This
note details the problems with individual rules in the Cushing Grammar.
[PDF Version
(240K) |
PS Version (226K) ]
- Building a Parser for ATC
Language
Martin Ellermann and Mirco Hilbert
RVS-Occ-01-05, 18 February 2002.
- Abstract: This paper describes a parser for the ATC
grammar defined in Developing an ATC Grammar using the Review of the
Cushing Grammar, RVS-Occ-01-03. The parser was built using
the compiler development tools Flex and Bison.
[PDF Version
(368K) |
PS Version (516K) ]
- Building a Corpus for Cockpit
Voice Recorder Transcripts
Oliver Hölz and Thomas Hettenhausen
RVS-Occ-01-06, 23 October 2001.
- Abstract: This paper describes an adequate corpus
format for cockpit voice recorder transcripts, making it easier to process
them automatically, including annotating (for keywords, and
especially for speech-act and dialogue-act classification), searching
and comparing. A script in Perl processes ASCII CVR
transcripts and renders the corpus format in XML.
[PDF Version (274K) |
PS Version (443K) ]
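The conversion described in this abstract might be sketched as follows. This is a minimal illustration, not the paper's method: the original script is in Perl, and the line format and XML element names used here are assumptions (the actual corpus schema is defined in RVS-Occ-01-06).

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical transcript line format: "09:41:22 CAM-1 set flaps fifteen"
LINE = re.compile(r"(\d\d:\d\d:\d\d)\s+(\S+)\s+(.*)")

def transcript_to_xml(lines):
    """Render ASCII CVR transcript lines as a simple XML corpus.

    Element and attribute names are illustrative; a real corpus format
    would also carry speech-act and dialogue-act annotations.
    """
    root = ET.Element("transcript")
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip lines that do not look like utterances
        time, speaker, text = m.groups()
        utt = ET.SubElement(root, "utterance",
                            {"time": time, "speaker": speaker})
        utt.text = text
    return ET.tostring(root, encoding="unicode")

xml = transcript_to_xml([
    "09:41:22 CAM-1 set flaps fifteen",
    "09:41:25 CAM-2 flaps fifteen",
])
print(xml)
```

Once utterances carry structured time and speaker attributes, searching and comparing transcripts reduces to ordinary XML queries.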
- Computational Analysis of Airplane
Cockpit-Voice-Recording Transcripts
Andre Döring, Mark McGovern and Jan Sanders,
RVS-Occ-01-07, 11 November 2001.
- Abstract:
The paper shows how a "semantic parser" may be built for a specific technical
natural language. The language is that of pilot-ATC communications in
commercial aviation. Five cockpit voice recorder transcripts from accidents
were used as the data on which the parser was developed. The intent is to
obtain information about the topic of each transcribed spoken phrase. An
attribute grammar is used, based on keyword analysis. The algorithm devised
was run by hand over the examples, and tuned until it was largely successful,
success being defined as a high recognition rate. Recognition rates on
examples varied between 53% (Nagoya) and 77% (Lufthansa 2904, Warsaw).
[PDF Version (218K) ]
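The keyword-based approach and its success measure can be sketched as follows. The keyword sets and phrases here are invented for illustration; the paper's actual attribute grammar and tuning procedure are far richer.

```python
# Illustrative keyword sets per topic; assumptions, not the paper's grammar.
TOPIC_KEYWORDS = {
    "altitude": {"climb", "descend", "feet", "flight", "level"},
    "heading":  {"turn", "heading", "left", "right"},
    "approach": {"cleared", "approach", "localizer", "glideslope"},
}

def classify(phrase):
    """Assign the topic whose keyword set overlaps the phrase most."""
    words = set(phrase.lower().split())
    scores = {t: len(words & kw) for t, kw in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def recognition_rate(labelled):
    """Fraction of (phrase, topic) pairs the classifier gets right."""
    hits = sum(1 for phrase, topic in labelled if classify(phrase) == topic)
    return hits / len(labelled)

sample = [
    ("descend and maintain one zero thousand feet", "altitude"),
    ("turn left heading two five zero", "heading"),
    ("cleared ILS approach runway two five left", "approach"),
]
print(recognition_rate(sample))
```

Tuning in the paper's sense amounts to adjusting such keyword sets (and grammar attributes) until the recognition rate on the transcript corpus is acceptably high.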
-
Fuel Flammability, Flight Path Coercion and
Technical Security Analysis
Peter B. Ladkin with Frank Taylor,
RVS-J-01-01, 17 September 2001.
- Abstract:
Technical safety analysis and technical security analysis have much in common.
We consider the events of September 11, 2001, in which commercial aircraft
with a high fuel load flew into the World Trade Center, and suggest that
research could profitably be revisited on some technical options for
mitigating such disasters, notably fuel flammability.
[HTML ]
-
Developing an ATC Grammar using the
Review of the Cushing Grammar
Martin Ellermann and Mirco Hilbert,
RVS-Occ-01-03, 28 June 2001
- Abstract:
A correct and relatively complete grammar is presented for a significant
fragment of U.S. pilot/ATC communications. Besides correctness and
completeness, among the requirements for the grammar were that it should be
in a "standard" form, humanly readable, easily extensible, and enable machine
parsing and processing. It supports an effort to analyse CVR
transcripts automatically, to identify speech and dialogue acts.
[PDF Version (364K) |
Postscript Version
(294K)]
-
How to Generate Fault Trees from Causal Influence Diagrams
Peter B. Ladkin, Bernd Sieker and Joachim Weidner,
RVS-Occ-01-04, 19 June 2001
- Abstract:
We show how to generate fault trees algorithmically from Causal
Influence Diagrams (CIDs), and report on the implementation of such a
facility in the CID-drawing tool cid2dot.
[PDF Version (333K) |
Postscript Version
(705K)]
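The unfolding step might be sketched as follows, using a pressure-tank example like the one treated elsewhere in these reports. This is an assumption-laden simplification: influences here all become OR gates, whereas the actual algorithm in RVS-Occ-01-04 and cid2dot determines gate types from the CID semantics.

```python
# Hypothetical CID: maps each event to the events that causally
# influence it. The node names are invented for illustration.
CID = {
    "tank-rupture": ["overpressure"],
    "overpressure": ["relief-valve-stuck", "pump-runs-too-long"],
    "pump-runs-too-long": ["timer-failure", "operator-error"],
}

def fault_tree(event, cid, depth=0):
    """Unfold a causal influence diagram into an indented fault tree.

    Each influencing factor becomes a child under a gate; events with
    no recorded influences are the basic events (the tree's leaves).
    """
    lines = ["  " * depth + event]
    causes = cid.get(event, [])
    if causes:
        lines.append("  " * depth + "OR")  # gate choice simplified here
        for cause in causes:
            lines.extend(fault_tree(cause, cid, depth + 1))
    return lines

tree = fault_tree("tank-rupture", CID)
print("\n".join(tree))
```

Because a CID is a directed acyclic graph, the recursion terminates, and the top event of the fault tree is simply the chosen root of the unfolding.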
-
An Example of Everyday Risk Assessment
Peter B. Ladkin, RVS-Occ-01-01, 2 February 2001
- Abstract:
I consider an everyday example of a putative risk on which guidance
is offered in various ways to affected company employees and the public,
namely the presence and use of mobile phones on filling station forecourts.
This leads me to separate the risk analysis from the safety policy
considerations, and to consider the interaction of various common
principles for formulating policy concerning everyday risks. I single
out one principle for particular attention, namely, Physical Causal
Justification, which says that if a safety measure R is suggested for
a situation S, then that is because there are circumstances in which
violating R in situation S leads to an accident.
(PDF Version (145K) |
Postscript Version (137K))
-
EMI, TWA 800 and Swissair 111
Peter B. Ladkin and Willi Schepper,
RVS-Occ-00-01, 10 October, 2000
- Abstract:
We refute Elaine Scarry's contention, published in the New York Review of Books
in September and October 2000, that external electromagnetic fields could have
been major contributors to the accidents to TWA Flight 800 and Swissair Flight
111. The refutation for TWA 800 cites NASA research done in support of the
investigation. The refutation for Swissair 111 is new here.
(PDF Version, 336K |
Postscript Version, 202K)
-
Memorandum to the Transport Subcommittee on
the Costs of NERC, 26 November 1998
Peter B. Ladkin
26 November 1998
- Abstract:
Great Britain is building what was billed as the most advanced
En-Route Air Traffic Control system in the world at the National
En-Route Center (NERC) in Swanwick, Hampshire, to control traffic in
the London Flight Information Region (FIR), which covers southern
British airspace.
The costs of forward development of the system, up to formal
acceptance, were GBP 216.9 million, according to NATS's annual report,
compared with a budget of GBP 475 million. This memorandum to the
Chair of the Transport Subcommittee, Mrs. Gwyneth Dunwoody, was
published in the House of Commons, Session 1998-99,
Environment, Transport and Regional Affairs Committee Third Report,
on The Future of National Air Traffic Services, on 21 January 1999,
pp52-55.
(HTML, 19K)
-
The Risks of Hubris
Inside Risks, Communications of the ACM 41(12), Dec. 1998
Peter B. Ladkin
28 October 1998
- Abstract:
This guest column was invited by Peter Neumann for his Inside Risks in the
December 1998 CACM. Using Ted Hughes's wonderful translation of Ovid's tale
of Phoebus the sun-god and his son Phaethon's foolish attempt to drive the
sun chariot, I suggest an analogy with some modern attitudes to the use of
computer technology in critical applications. Ted Hughes died on 28 October 1998,
and this essay is dedicated to his memory.
(HTML, 6K)
-
EMI and TWA800: Critique of a Proposal
Peter Ladkin
10 April 1998
- Abstract:
Elaine Scarry proposed that high intensity radio frequency fields (HIRF),
possibly generated by the U.S. Military, be investigated as a possible
causal factor in the TWA800 accident. Her argument appears to be very
poor. I state it, and critique it.
(HTML, 16K)
-
The Ariane 5 Accident: A Programming Problem?
Peter Ladkin
20 March 1998
- Abstract:
I consider three papers on the Ariane 5 first-flight accident, by
Jézéquel and Meyer suggesting that the problem was
one of using the appropriate system design techniques; by
Garlington on the culture of flight-control and avionics software
engineering; and by Baber on use of pre- and post-conditions in
programming. I conclude that Jézéquel and Meyer's
argument is fundamentally mistaken; and that although Garlington's
(reconstructed) critique of them is broadly correct, a conclusion closer
to that of Jézéquel and Meyer is most appropriate,
following from a different argument from that which they gave.
(HTML, 24K)
-
Evidence to the Transport Subcommittee on
NERC/NSC, Wednesday 11 March, 1998
Peter B. Ladkin
8 March 1998
- Abstract:
Great Britain is building what was billed as the most advanced En-Route
Air Traffic Control system in the world at the National En-Route Center
(NERC)
in Swanwick, Hampshire, to control traffic in the London Flight Information
Region (FIR), which covers southern British airspace.
The £350+ million system has run into problems, experiencing
successive delivery delays. The contractor has succeeded in getting it to
run on some 35 workstations but not the 200 for which it was designed.
I was invited to give oral evidence before the Transport Subcommittee
of the House of Commons on the issues: (a) how long would an audit
of the NERC system, whose purpose would be to determine whether the
current NERC system could ever be made to work, take? (b) Sir Ronald
Mason's assertion that dual-sourcing, i.e. awarding the New Scottish
Center contract (for the other en-route center in Britain, to cover the
Scottish FIR) to another systems supplier/integrator, is a `basic principle'
of safety-critical system development.
(HTML, 34K)
-
Letter to the Transport Subcommittee on
NERC/NSC, Monday 17 November, 1997
Peter B. Ladkin
17 November 1997
- Abstract:
Great Britain is building what was billed as the most advanced En-Route
Air Traffic Control system in the world at the National En-Route Center
(NERC)
in Swanwick, Hampshire, to control traffic in the London Flight Information
Region (FIR), which covers southern British airspace.
The £350+ million system has run into problems, experiencing
successive delivery delays. The contractor has succeeded in getting it to
run on some 35 workstations but not the 200 for which it was designed.
I wrote to the Transport Subcommittee on 9 November, 1997, detailing my
concern with the project, based on public evidence concerning technical
problems, and with plans for its completion as briefed by National Air
Traffic Services (NATS), the NERC client, in September 1997. I was
briefed by specialists on the new Canadian ATC system, CAATS, and
the new Australian ATC system, TAAATS, and included their comments in
my letter.
(HTML, 28K)
-
Towards "Why...Because"-Analysis of Failures
Karsten Loer
20 February 1998
- Abstract:
This thesis introduces the Why...Because Analysis (WBA) method of
explaining failures causally. From a brief history of an incident, WBA
first aims at inquiring after and identifying the significant
acts/states/events/non-events that partake in a causal explanation, and
then proving rigorously in the formal logic Explanatory Logic (EL)
that the explanation found is correct and relatively sufficient.
WBA along with formal proof in Lamport's hierarchical style is presented
by means of a small running example.
(G-ZIPped PS, 501K)
-
The Crash of Flight CI676,
a China Airlines Airbus A300, Taipei, Taiwan, Monday 16 February, 1998:
What We Know So Far
Peter B. Ladkin
Latest version: 19 February 1998
- Abstract:
The publicly available facts concerning the accident to CI676 are
given, and it is suggested where they point - and where they don't point.
Specifically, they cannot show a repeat of the Nagoya accident, because
of mandatory A300 system modifications since that accident. This
article will be continually modified as information is gathered.
(HTML, 41K)
-
Risks of Technological Remedy
Inside Risks, Communications of the ACM 40(11):160, Nov. 1997
Peter B. Ladkin
10 September 1997
- Abstract:
This guest column was invited by Peter Neumann for his Inside Risks in the
November 1997 CACM. I argue by example that because of the resilience of
some forms of human error, attempted technological solutions may miss the
mark. The first example compares the misidentification of Flight
KAL007 in 1983 with that of IR655 in 1988, and the second considers
CFIT accidents, and their avoidance through GPWS and EGPWS.
(HTML, 6.5K)
-
The Crash of Flight KE801, a Boeing B747-300,
Guam, Wednesday 6 August, 1997:
What We Know So Far
Peter B. Ladkin
11 September 1997
- Abstract:
The publicly available facts concerning the accident to KE801 are
given, along with references to sources, and the features of the accident
that arise from these facts are discussed and analysed, along with
various pieces of reasoning and commentary from the press.
(HTML, 59K)
-
Analysing the 1993 Warsaw Accident With a WB-Graph
Michael Höhl and Peter B. Ladkin
8 September 1997
- Abstract:
We analyse the final report of the 1993 Lufthansa A320 accident in
Warsaw using the WB-Graph method, and conclude that some fundamental causal
factors are missing from the report's conclusions, although mentioned
in the body of the report.
(HTML, 30K)
-
Controlled Flight Into Terrain: What is Being Done?
Peter B. Ladkin
21 August 1997
- Abstract:
Controlled flight into terrain (CFIT) remains the
greatest cause of fatalities in commercial air transportation.
What is it, and what is being done to reduce its incidence?
In the era of the lowest-ever aviation fatality rates, eliminating
CFIT altogether poses a new challenge.
(HTML, 46K)
-
Flying An ILS or Localiser Approach - An Example
Peter B. Ladkin
25 August 1997
- Abstract:
In order to understand how an aircraft flying under instrument
flight rules approaches an airport for landing, one must understand
standard instrument approach procedures and documentation.
I introduce the procedures and charts used when flying an ILS
or localiser instrument approach to an airport, as examples of
a precision and non-precision approach respectively.
I follow the ILS and localiser-only
approaches to Won Pat International Airport in Agana, Guam.
There are other traditional non-precision approach types
on which I do not comment, such as VOR,
VOR/DME, NDB and RNAV. There are also approach types using new-generation
avionics technology which are in development but which I don't
discuss: Microwave Landing System (MLS) and GPS approaches in particular.
(HTML, 61K + GIF images)
-
Traditional Aviation Radio Navigation: An Introduction
Peter B. Ladkin
20 August 1997
- Abstract:
I describe the basics of aircraft land-based radio navigation with
VOR and NDB reception equipment.
(HTML, 39K + GIF images)
-
To Drive or To Fly - Is That Really The Question?
Peter B. Ladkin
24 July 1997
- Abstract:
Statistical comparisons of fatal accidents while flying
on a commercial jet and while driving on rural interstates were published
during 1989-1991. They largely use data from the mid-1970s through the
1980s, and show that the risk of dying on a commercial
jet flight was uncorrelated with the length of the flight, and that
for trips of longer than 303 miles, flying was safer than driving
if you were in the statistical group of `safe' drivers. The age and
the type of data warrant some comments on the effectiveness of statistical
comparisons, in particular how commercial passenger flying may have
changed in the 1990s.
-
Electromagnetic Interference with Aircraft Systems: why worry?
Peter B. Ladkin with colleagues
13 July 1997
- Abstract:
There are worries about suspected
electromagnetic interference with aircraft systems from electronic devices
used by passengers. Some first-hand incident reports from colleagues
are included. The phenomenon seems to be hard to pin down -- colleagues
explain why. It may be that the current regulatory situation affects
reporting and investigation of suspected incidents. Finally, I suggest some
ways in which the regulatory environment could be changed to aid
investigation.
(HTML)
-
Formalising Failure Analysis
Thorsten Gerdsmeier, Peter Ladkin and Karsten Loer
4 June 1997
- Abstract:
We consider systems which are artifacts designed to fulfil a goal. Most
computer systems are of this sort. Criteria for success have been
well-studied in the domain of specification and verification. Failure
has been less well studied. The study of failure is normally exemplified by
case histories. In other work, we have devised a method for determining and
representing causality in case histories, called the WB-graph method.
Methods need tools.
We describe a tool for building and analysing WB-graphs built in DATR,
a language devised for computational linguistics.
(HTML)
-
A Tool For Building and Analysing WB-Graphs
Thorsten Gerdsmeier
3 March 1997
- Abstract:
We describe a tool for building and analysing WB-graphs built in DATR,
a language devised for computational linguistics. Building the WB-graphs
in DATR allows a number of consistency checks easily to be programmed.
(HTML)
-
Analysing the Cali Accident With a WB-Graph
Thorsten Gerdsmeier, Peter Ladkin and Karsten Loer
14 January 1997
- Abstract:
We analyse the Cali accident to American Airlines Flight 965, a
Boeing 757, on 20 Dec 1995. We take the events and states from the
accident report, construct a WB-graph (`Why?...Because...-graph) in
both textual and graphical form of the 58 events and states, and
compare this representation favorably with the Findings section of
the original report. We conclude that the WB-graph is a useful
structure for representing explanations of accidents.
(HTML)
-
News and Comment on the AeroPeru B757 Accident
Peter Ladkin
8 November 1996
- Abstract:
Not only was this accident very disturbing (the third fatal accident
to a B757 in 10 months, after a 13-year perfect safety record), but
news reports and statements in the early days were highly misleading.
The aftermath of the Birgenair accident had sensitised me to these
unfortunate social processes.
The purpose of
this essay
is thus twofold: technically, to provide
continuing timely factual information on, and some analysis of,
the AeroPeru 603 accident on October 2, 1996; and, sociologically,
to provide a history and commentary of what was said about the crash,
when, and by whom.
-
Some Dubious Theses in the Tense Logic of Accidents
Peter Ladkin
27 September 1996
- Abstract:
Some authors propose to use forms of linear time temporal logic
to describe and explain accidents. Common themes are:
(1) that the 'original causes' of an accident precede all other
relevant events; (2) how to pick the temporal point of view from
which the accident can be explained; (3) that it is a criterion of
successful explanation that an assertion of the accident event
should be logically derivable from assertions of preceding
causally-relevant events; (4) that the causal relation is expressed
by the tense-logical 'leads to' operator. Some authors have
suggested (5) that an accident is explained by exhibiting the proof
of the weak fairness of the accident-action. I have explained
elsewhere (ad 2) how to pick a temporal point of view, and (ad 4)
that causality is not suitably expressed by 'leads to'.
I show here that the other three theses are implausible; that (1)
may be rescued only at great cost to intuition and common practice;
that (3) may be modified in the direction of a deductive-nomological
model of explanation; and that (5), if at all worthy, has a much
simpler equivalent.
(HTML)
-
Explaining Failure with Tense Logic
Peter Ladkin
10 September 1996
- Abstract:
Accidents involve events, and events causing other events, laid out in
time until the final disaster. Boolean logic alone is insufficient to
describe such failure, although its use in the shape of fault trees
has been very successful. Reasoning about objects and their properties
is the domain of predicate logic, and reasoning about events and their
temporal relations is the province of tense logic. I show how the
reasoning involved in accident analysis may be formalised in tense
logic, using the past-time operator since. I also show how
a fault tree may be derived from a temporal logic description without
use of frame axioms. I critique proposals to use purely future
operators, and to identify the causality relation with a purely
temporal logic operator. Finally, I consider the important concepts
of interactive consistency and tight-coupling not to be worth
formalising inside a tense logic.
(HTML)
-
Formalism Helps in Describing Accidents
Peter Ladkin
4 September 1996
- Abstract:
I analyse the 'probable cause' of the 1979 Chicago DC-10 accident
using a minimal formalism, and find an omission. The omission is
contained in the body of the report. McDonnell Douglas's statement
barely addresses the point contained in the omission. I conclude
that formalism helps in accident reporting by enabling simple
consistency and omission checks.
(HTML)
-
Comments on Confusing Conversation at Cali
Dafydd Gibbon and Peter Ladkin
7 February 1996
- Abstract:
There are interesting linguistic aspects of the conversation
between Air Traffic Control and American Airlines flight 965
shortly before the crash of AA965 near Buga, Colombia, on
20 December 1995. We discuss two features, lexical ambiguity
and direction-naming conventions, as well as two linguistically
incongruent features of ATC radio transmissions.
(This abstract is almost longer than the article itself.)
-
The X-31 and A320 Warsaw Crashes: Whodunnit?
Peter Ladkin with links to others
Revised version 28 January 1996
- Abstract:
Recent discussions in the RISKS newsgroup asked
whether there was a software or design problem involved in
the crash of the X-31 research plane in January 1995. Discussants
also introduced the Warsaw crash of the Airbus A320 in order to
talk generally about system faults and mishaps. I proceed with
an analysis of the concepts used, give my list of causal factors,
and explain how I got them, and (hopefully) clarify what bits
systems have, how they have problems, and whether it makes sense
to blame the usual suspects for mishaps. I include information
from people who really know.
(HTML)
-
Reasons and Causes
Peter Ladkin
31 January 1996
- Abstract:
I discuss the art of reasoning about reasons and causes
in accident investigations, in the belief that some formal
understanding may improve our ability to analyse. I discuss
a recent paper of Moffett, Hall, Combes, and McDermid in
which they suggest a formal theory for reasoning about
causes, and apply it to a partial analysis of the
Lufthansa A320 accident in Warsaw.
(HTML)
- Also concerned with aviation are
-
-
Sociology of Scientific Knowledge Is Not
Radically Sceptic
Peter B. Ladkin
RVS-Occ-02-01, 24 January 2002.
- Abstract
My main thesis is that a view of science often taken by
sociologists of science, considered radically sceptical by Jean Bricmont
and Alan Sokal, is compatible with a naively realist view of science, indeed is
justified as a description of everyday scientific activity and its
products. There are, however, views of science incompatible with a naive
realism, say, those proposed by so-called postmodern critics, in which science
is just another form of political activity and a connection to any "facts" is
illusory. Such views are harder to argue against than it might seem, even
though they might appear to be extreme, and obviously false. The two views,
one compatible with naive realism and the other not, should be distinguished.
It is also worthwhile summarising some sources of scientists' views of
what science is. I include a résumé due to Larry Laudan.
[HTML ]
-
On Classification of Factors in Failures and Accidents
Peter B. Ladkin
16 July 1999, extended 04 August 1999
- Abstract
I consider a classification of complex-system components into
digital, mechanical, electrical, wetware and procedural systems,
which are not mutually exclusive and have themselves subcategories.
Other schemes are also possible, such as one based on the
different mathematical and other intellectual tools required.
Subsystems do not generally form a static hierarchy, and may be identified
according to causality, provenance, or habit. Whether failures are classified
as component failures or interaction failures depends on the subsystem
identification chosen. There may well be no indivisible, "unit"
components of a system. Features which are errors should be distinguished
from features which are infelicities. I classify "computer-related
factors", discuss some human factor classification schemes, and mention
mode confusion as an error factor which has gained prominence with the
increased use of digital avionics systems. All this is illustrated and
argued by means of aviation accident examples.
[HTML, English, 97K]
-
Hazards, Risk and Incoherence
Peter Ladkin
15 June, extended 28 June, 1998
- Abstract:
I consider some standard engineering definitions of hazard, accident and risk,
and show on a simple example how the definitions of hazard and risk are
deficient.
(HTML, 35K)
-
Formale Beschreibung von DATR (Formal Description of DATR)
Thorsten Gerdsmeier
5 January 1998
- Abstract:
The question of deriving a provably-correct implementation of the pure
inheritance language DATR is considered. First one must have a semantics,
with which to prove the implementation consistent. We consider the
denotational semantics of William Keller, critique it and present our
own improved version. Although it is in principle possible to derive an
implementation direct from the denotational semantics, it is unlikely to
be efficient, for reasons we enumerate. We thus define a Natural Semantics
(à la Kahn), a collection of rewrite rules. We have validated this
semantics by implementing it in the theorem prover Isabelle (see
Executing DATR Programs with a Prover, below)
and comparing the output of the prover with the intuitively intended
output of a variety of simple DATR programs. This Isabelle implementation
is briefly described, but its technical description will appear in
further work. It is relatively efficient.
[ Thesis, PS, 275K gzipped ]
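The flavour of evaluating a pure inheritance language by repeatedly applying rewrite rules can be suggested with a toy example. This is emphatically not DATR's actual syntax or semantics (DATR's path extension and global/local inheritance are much subtler); the theory, node names, and resolution rules below are invented for illustration only.

```python
# Toy pure-inheritance theory: each node maps paths either to a
# literal value or to a (node, path) inheritance reference.
THEORY = {
    "Noun":  {("plural",): "s", ("singular",): ""},
    "Sheep": {("plural",): ("Noun", ("singular",))},  # irregular: no -s
    "Dog":   {},  # inherits everything from its parent
}

DEFAULT_PARENT = {"Sheep": "Noun", "Dog": "Noun"}

def evaluate(node, path):
    """Resolve node:path by repeatedly applying rewrite steps:
    local lookup, then explicit inheritance, then default inheritance."""
    while True:
        entry = THEORY.get(node, {}).get(path)
        if isinstance(entry, str):
            return entry              # literal value: evaluation halts
        if isinstance(entry, tuple):
            node, path = entry        # explicit inheritance: rewrite query
            continue
        node = DEFAULT_PARENT[node]   # no local entry: default inheritance

print(repr(evaluate("Dog", ("plural",))))
print(repr(evaluate("Sheep", ("plural",))))
```

A natural semantics in Kahn's style presents such steps as inference rules over judgements like "node:path evaluates to value", which is what makes an implementation as an Isabelle object logic possible.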
-
Executing DATR Programs with a Prover
Thorsten Gerdsmeier
5 December 1997
- Abstract:
A natural semantics (à la Kahn) was written for programs in DATR,
a pure inheritance language originally devised by Gazdar for use in
computational phonology. This semantics was implemented as an `object logic'
in the metalogic of the Isabelle theorem prover (from Paulson). DATR programs
are thus evaluated by theorem-proving in Isabelle. This has two advantages:
first, comparing this method with the output of other DATR compilers is
a way of checking the correctness of the semantic rules; second, assuming the
soundness of Isabelle, the evaluation is guaranteed correct (by construction)
for the rules, and therefore if there is confidence in the soundness of
the rules, this is a guaranteed-correct compiler for DATR, against which
other implementations may be compared for correctness.
[ Demo ]
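The flavour of evaluating DATR queries by rule selection and rewriting can be sketched with a toy evaluator. This is not the thesis's natural semantics: the theory, node names and treatment of inheritance are invented for illustration, and DATR's full local/global distinction and value concatenation are omitted.

```python
# Toy evaluator for a DATR-like pure inheritance language.  A node maps
# path prefixes (tuples) to either a value or an inheritance redirect;
# the longest matching prefix is chosen and the unmatched suffix is
# carried along, mimicking DATR-style rule selection.

def query(theory, node, path, global_node=None):
    """Evaluate (node, path) by longest-prefix rule matching."""
    if global_node is None:
        global_node = node
    rules = theory[node]
    best = max((p for p in rules if path[:len(p)] == p), key=len)
    rhs, suffix = rules[best], path[len(best):]
    if isinstance(rhs, str):
        return rhs                          # a value: evaluation stops
    tgt, tgt_path = rhs
    if tgt == "GLOBAL":                     # re-query the original node
        tgt = global_node
    return query(theory, tgt, tgt_path + suffix, global_node)

# An invented theory: Love inherits its defaults from VerbRoot.
THEORY = {
    "VerbRoot": {
        ("syn", "cat"): "verb",
        ("mor", "past"): ("GLOBAL", ("mor", "root")),  # crude 'past = root'
        ("mor", "root"): "UNDEF",
    },
    "Love": {
        (): ("VerbRoot", ()),   # inherit everything else from VerbRoot
        ("mor", "root"): "love",
    },
}

print(query(THEORY, "Love", ("syn", "cat")))    # verb
print(query(THEORY, "Love", ("mor", "past")))   # love
```

Validating such rules against the intended output of simple theories is exactly the role the Isabelle implementation plays in the work above, at full scale and with machine-checked rule application.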
-
Abstraction and Modelling
Peter Ladkin
16 April 1997
- Abstract:
Engineers talk of abstractions and
models. I define both, consistently with the way that
engineers, computer scientists, and formal logicians use the
terms. I propose that both abstractions and models have
what Searle calls an agentive function, namely that both
models and abstractions are models/abstractions of something,
for a purpose.
It follows that both abstraction and modelling are most fruitfully
thought of as ternary relations (or, when the purpose is forgotten,
binary). A canonical example of abstraction arises in considering
logical form. The criteria used to determine logical form are
used, mutatis mutandis, to define abstraction (the purpose of
logical form is given: to explicate a sentence's/proposition's role in
inference; the purpose for a general abstraction remains to be
selected by the abstracting agent).
One may therefore consider a generalised, Wittgensteinian
notion of logical form as
what both abstractions and models have in common with their targets.
Abstraction is closely related to the idea of describing, a relation
between sentences and states of affairs; in fact it can be considered
to be a precise version of description.
(HTML)
-
Logical Form as a Binary Relation
Peter Ladkin
16 April 1997
- Abstract:
I consider the notion of logical form, and argue that there are
considerable difficulties with defining a canonical logical form
of sentences and propositions: for given A, finding a
unique B such that A has the logical form of B.
The purpose of logical form is unproblematically served by
allowing for a given A varied B such that
A has the logical form of B. I give a criterion for
the relation of having the logical form of in terms of
the notions of syntactic transformation and
inferential equivalence.
(HTML)
-
On Needing Models
Peter Ladkin
22 February 1996
- Abstract:
In his paper, Limits of Correctness in Computers,
Brian Cantwell Smith proposed that program verification couldn't
work, because computer systems are based on models of the `real world'
and there's no general theory of how these models relate to
what they're modelling. I think he's wrong, both in his arguments
and his conclusions.
This paper sets out my reasons.
-
Correctness in System Engineering
Peter Ladkin
2 April 1995
- I analyse what is meant by correctness and failure of systems,
using techniques from philosophical logic. I contradict a
well-known claim of Fetzer. I look at the role played by
specification, and simple arguments concerning failure,
and try to clarify the boundary between system properties
that may be proved and those that cannot be, by using two
examples, one due to
Boyer and Yu.
Draft Paper
[ HTML |
Postscript |
Dvi ]
Back to Contents
Commentary and Resources on Commercial Aviation Accidents
This project aims to collect resources and reliable technical
comment on aspects of
computer-related commercial aviation safety on the World Wide Web. The
hypertext compendium below is the result. Accidents
which have a computer-related component are commented on, and where possible
the text of the final report is made available. Aviation organisations concerned
with safety are linked, including both administration and research. Electronic
resources such as the RISKS newsgroup are consulted; opinions from experts are
solicited and included. Social aspects are not given as much emphasis as
technical, although many would argue they play a major role.
-
Computer-Related Incidents with
Commercial Aircraft
Overview by Peter Ladkin with links to others
Ongoing
- Abstract:
There are now many commercial air transport aircraft whose primary
flight controls are operated by computers, to which pilot commands
and other data are input. Such aircraft are the
Airbus A319/320/321/330/340 and the Boeing B777. The use of
computers for such safety-critical tasks as airplane flight
control means that the safety of air travel has become an
interdisciplinary venture which involves computer scientists
as well as airplane and avionics engineers, procedure and
human-factor specialists, control-system specialists,
navigation and software engineers, and management experts.
This is a commented
Hypertext compendium
of reports, papers and pertinent short observations,
such as those from the
RISKS-Digest,
the
Forum on Risks to the Public in Computers and Related Systems,
an on-line journal supported by the ACM Committee on Computers and
Public Policy and edited by Peter Neumann,
which has established a reputation as the premier
electronic source for news and discussion of computer-related risks
since the mid-1980's.
Back to Contents
This project, joint with Prof. Harold Thimbleby of Middlesex University
in London, attempts to produce clear, correct, consistent and complete
operating manuals for human-machine interaction tasks. Our basic point of
view is that the machine-part of the task behaves as a state machine, and
the human-part follows a `user manual', which consists of a procedural
description of the state machine in some form or other, according to intended
use. Formal specification of such interfaces can lead to automatic generation
of operating manuals guaranteed to fulfil certain formal desiderata such as
consistency and completeness. Since the quality of operating manuals in
commercial aviation is widely acknowledged to be a problem, we hope this
work will support the forward-generation of such manuals. The reverse-engineering
of such manuals has also been addressed in one publication, the
Analysis of a Technical Description of the Airbus A320 Braking System.
-
From Logic To Manuals Again
Harold Thimbleby and Peter Ladkin
published in IEE Proceedings - Software Engineering, June 1997
- Abstract:
A republication of From Logic To Manuals, below,
necessitated by the manual processing of the paper at the IEE, which
resulted in egregious typographical mistakes in the SEJ 11(6) (November 1996)
published version, and a comment from us as to how much easier it would
have been to ensure correctness had the journal worked with our HTML version
that was output from the program. Ironically, the republication was marred
by a mistake made by the (new) typesetters, which changed certain fonts.
A corrected republication of the June 1997 IEE Proceedings on Software
Engineering was sent to all subscribers. It took almost as much work to
get the publishers to publish a correct version as it did to write the
original paper, thus proving our point in spades -- unfortunately only
to us.
-
From Logic To Manuals
Harold Thimbleby and Peter Ladkin
5 March 1996, revised 5 September 1996
- Abstract:
We demonstrate a simple language that combines specifications and
manuals. This shows: first, that a manual can be automatically
reconstructed from a logic specification that is effectively identical
to the original logic (up to ambiguities in natural language); second,
that such an automated process can help detect errors.
The paper contains examples of use of the language in manual
generation. It was created by running a source version (with the
bulk of the text as meta-comment, and
with input but no output to any examples) through the language
processor. The output for the examples was automatically generated
and inserted by the processor.
Here are the
source file and the
output version with examples.
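The core idea above, that a manual can be mechanically derived from a specification, so that completeness and consistency hold by construction, can be sketched in miniature. The device, states and button names are invented, and this is only an illustration of the idea, not the paper's language.

```python
# Sketch: generating manual prose from a state-machine specification.
# Because the text is derived from the transition table, every defined
# state/action pair is documented (completeness) and no sentence can
# contradict the machine (consistency).

TRANSITIONS = {
    # (state, action) -> next state; all names invented
    ("off", "press POWER"): "standby",
    ("standby", "press POWER"): "off",
    ("standby", "press START"): "running",
    ("running", "press STOP"): "standby",
}

def generate_manual(transitions):
    lines = []
    for state in sorted({s for s, _ in transitions}):
        lines.append(f"When the device is {state.upper()}:")
        for (s, action), target in sorted(transitions.items()):
            if s == state:
                lines.append(f"  - {action} puts the device in {target.upper()}.")
    return "\n".join(lines)

print(generate_manual(TRANSITIONS))
```

A manual written by hand can silently omit a transition; one generated this way cannot.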
-
Analysis of a Technical Description of the Airbus
A320 Braking System
Peter Ladkin
28 October 1994
- I analyse the description of the operation of the Airbus A320
braking systems contained in the Flight Crew Operating Manual.
I use the predicate-action diagrams of Lamport to express and
to complete the description, and give reasons why such a more
rigorous expression is preferable.
Here is a
PDF paper which appeared in
High Integrity Systems 1(4),
as well as a short paper
Airbus A320 Braking as Predicate-Action Diagrams
which appeared in the proceedings of the 5th GI/ITG Workshop on Formal Description Techniques for
Distributed Systems, University of Kaiserslautern, June 1995.
-
A proper explanation when you need one
Harold Thimbleby and Peter Ladkin
10 May 1995
- Abstract:
Quality program design has received considerable attention from the
software engineering community. Quality user manual design has received
considerable attention from the human computer interaction community.
Yet manuals and systems are often independently conceived, and thus
do not well complement each other. This paper shows one method of
easily obtaining correct and complete user manuals guaranteed to
correspond with the system they document.
Postscript -
DVI
Appeared in People and Computers X,
Proceedings of the BCS Conference on Human-Computer
Interaction, HCI'95,
ed. M. A. R. Kirby, A. J. Dix and J. E. Finlay,
Cambridge University Press 1995.
Back to Contents
Analysis of Communication in Parallel and Distributed Environments
This project, joint work with Dr. Barbara Simons of IBM Santa Teresa Labs,
analyses the communication between parallel loop processes. The aims
are to provide combinatorial criteria for successful parallel programs and
processes - those that run without deadlock - and algorithms to serialise
clusters of such processes, the important `recombination' step in parallel
compilation, after decomposition and processor assignment.
Work so far is collected in one monograph, which will be extended and
completed in early 1997.
-
Practical Static Methods for Exact Deadlock
Prediction in Message Passing Concurrent Processes
Christina Claudia Wuzik
9 February 1998
- Abstract:
This work extends results of Ladkin and Simons in
Static Analysis of Communicating Processes.
A mathematical method is found for reducing the quantity of information
that must be examined to detect deadlock in loop processes. Two
observations of Simons concerning the special case of a communication
graph which is a tree are proved. Finally, it is observed that the
problem may be formulated as iterated matrix multiplication (Mackworth)
and algorithms for parallel matrix multiplication are discussed.
(G-ZIPped PS, 175K)
-
Static Analysis of Communicating Processes
Peter Ladkin and Barbara Simons
22 April 1995
- Abstract: We define the Message Flow Graph, a graph that
shows the point-to-point communications between code for concurrent
processes. Trouble is, communication usually isn't point-to-point --
on different executions of the same send, the message may be
received by execution of a different receive statement in
the target
process. For simple non-terminating processes, so-called
loop processes, we show how to build a Message Flow Graph
from the source code, and when one can be built. If the processes
deadlock, such a graph may not be built, so our construction and
associated mathematics may be used for compile-time deadlock
analysis. Two mathematical conditions, the Numerical Matching
Condition and the Sequence Condition, are particularly
important and are needed to prove the algorithm correct. We show how
to perform the analysis for communications of many sorts: synchronous
(rendezvous), asynchronous, and multiway synchronous - the basic
analysis is performed for two-process synchronous communication and
then extended.
This 139-page draft version of the manuscript
does not yet include
certain of our theorems on polynomial-time deadlock detection, which
will be added shortly. Read it at your peril - but please tell us
what you think!
The monograph has been accepted for publication
in the series Lecture Notes in Computer Science from
Springer-Verlag.
Postscript
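The flavour of the Numerical Matching Condition can be sketched as a counting check: over one iteration of every loop body, the sends on each channel must be matched by receives. This is a deliberate simplification of the monograph's condition (which also involves the order of events and differing iteration counts); process bodies and channel names are invented.

```python
from collections import Counter

# Each loop process is represented by the per-iteration sequence of its
# communication statements: ("send", channel) or ("recv", channel).
# A necessary condition for constructing a Message Flow Graph is that
# sends and receives balance per channel (simplified Numerical
# Matching Condition); an imbalance means some communication can
# never complete, a symptom of deadlock.

def numerical_matching(processes):
    sends, recvs = Counter(), Counter()
    for body in processes:
        for kind, chan in body:
            (sends if kind == "send" else recvs)[chan] += 1
    return sends == recvs

ok = [
    [("send", "a"), ("recv", "b")],   # process P
    [("recv", "a"), ("send", "b")],   # process Q
]
bad = [
    [("send", "a"), ("send", "a")],   # two sends per iteration...
    [("recv", "a")],                  # ...but only one receive
]
print(numerical_matching(ok), numerical_matching(bad))
```

Failing this check is cheap to detect at compile time, which is the point of the static analysis.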
-
Static Deadlock Analysis for CSP-Type Communications
Peter Ladkin and Barbara Simons
April 1995
- Abstract:
We present two tests for analyzing deadlock for a class of
communicating sequential processes. The tests can be used for deadlock
detection in parallel and distributed programs at compile-time, or
for debugging purposes at run-time. They can also be used in
conjunction with an algorithm we have for constructing valid
execution traces for this class.
This paper is a resumé of Chapter 5 of
the monograph for
Responsive Computer Systems, ed. D.S. Fussell and M. Malek,
Kluwer, 1995.
Back to Contents
This project concerns the development and application of Lamport's
TLA and the specification language TLA+. It includes both algorithm
verifications, and applications such as specification and analysis of
aircraft systems and joint human-machine tasks in aviation.
I work with various colleagues using TLA, including
Leslie Lamport of Digital Systems Research Center in Palo Alto, California
and Jean-Charles Grégoire of Laboratoire
INRS-Télécommunications in Montréal, with whom we
conducted the subproject Specification and
Verification in TLA of RLP1, the data link layer protocol of TDMA mobile
cellular phone systems. The result of our work may be found below in
Safely Sliding Windows by Dirk Henkel.
Henkel will complete his TLA verification by the end of 1998.
-
Using the Temporal Logic of Actions: A Tutorial on TLA Verification
Peter Ladkin,
17 June 1997
- Abstract:
This tutorial on TLA verification starts by motivating the syntax and
the logic of TLA by means of a natural deduction system -- for
hierarchical proofs in TLA are (inverse) natural deductions -- and
works through a TLA specification and verification using the buffer
example from Formal but Lively Buffers in TLA+, RVS-RR-96-07.
The explanations are need-driven - TLA concepts are introduced only
where necessary for working through the example - and there's a little
hand-waving here and there for the same reason.
Postscript, gzip-ed, 169K -
DVI, gzip-ed, 59K
-
Spezifikation eines 20 l-Perfusionsbioreaktor in TLA+
Lutz Sommerfeld,
11 July 1997
- Abstract:
This work is the Diplom thesis (in German) corresponding to
the next paper. It describes the TLA+ specification
of a real-time hybrid system with vague control laws, namely a 20-liter
biological fermentation reactor in the Biotechnology section of the Faculty of
Technology. This specification distinguishes itself from other specification
problems in hybrid systems by the fact that the operation of the fermenter
follows vague laws, and also sometimes fails (due to contamination, but
also sometimes because it just doesn't work). This work was accomplished
within the
BioSpec Project.
[ DVI 80K, gzipped |
PS 219K, gzipped ]
- Beschreibung eines vagen Echtzeit-Hybrid-Systems in TLA+
Lutz Sommerfeld, Peter Ladkin,
17 June 1997
- Abstract:
This specification describes a technical device for which no physically
or mathematically precise description exists. Only a vague, partly
colloquial description of the relationships is available, since the
exact process parameters for carrying out the cultivation are unknown
and determining them is unnecessary. In this respect the task differs
from the "Steam Boiler Control Problem". The solution presented is
formulated at the highest feasible level of abstraction, so that both
the process-environment modelling and the theories of the
logical form of artifacts could be taken into
account. TLA+ was chosen as the specification language.
Postscript
-
Safely Sliding Windows
Dirk Henkel
5 May 1997
- Abstract:
A formal TLA proof is given that the sliding windows protocol with
linear arithmetic implements the safety property of a reliable, lossless,
asynchronous FIFO queue. In order to make the verification as routine
as possible, the proof is divided into four parts with
three additional specifications intermediate between
the requirements specification
and the sliding windows specification.
Specs + proofs:
[ DVI 30K, gzipped |
PS 116K, gzipped ]
Commented Specs only:
[ DVI 12K, gzipped |
PS 59K, gzipped ]
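The safety property being verified, that the protocol delivers exactly the sent sequence, in order, can be illustrated by a small go-back-N-style simulation over a lossy channel. This is only an illustration of the property, not the verified specifications, and the loss pattern and window size are invented.

```python
def sliding_window(data, window=3, drop=lambda tick: False):
    """Go-back-N-style sliding window over a channel that may drop
    frames; returns the delivered sequence.  Safety: delivery is a
    lossless, in-order copy of `data` (termination additionally needs
    the loss to be fair, i.e. not every transmission is dropped)."""
    base, expected, delivered, tick = 0, 0, [], 0
    while base < len(data):
        # the sender retransmits every unacknowledged frame in its window
        for seq in range(base, min(base + window, len(data))):
            tick += 1
            if drop(tick):
                continue                  # frame lost in transit
            if seq == expected:           # receiver accepts only in order
                delivered.append(data[seq])
                expected += 1
        base = expected                   # cumulative ack slides the window
    return delivered

msgs = list("FORMALLY")
# drop every third transmission; the delivered sequence is still intact
print(sliding_window(msgs, drop=lambda t: t % 3 == 0) == msgs)
```

The TLA proof above establishes this correspondence for all behaviours, via intermediate specifications, rather than for sampled runs.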
-
Formal but Lively Buffers in TLA+
Peter Ladkin
7 January 1996 - New Version
- Abstract: I specify abstract and concrete buffers in
TLA+, the specification language of Lamport's Temporal Logic
of Actions, and provide a fully formal, complete proof in TLA
that the one implements the other. The proof is written in
Lamport's hierarchical style. The paper may be read as both
a tutorial and a commentary on the use of TLA.
Postscript -
DVI
- The
LaTeX
source (200K uncompressed) could be useful as one
person's view on how to write `literate proofs' in TLA using
Lamport's proof style
pf.sty.
The source also needs the Lamport style files
tla.sty and
abbrev.sty to compile.
- Accompanying
the paper are copies of an older version of the proof to
depth 4:
Postscript -
DVI
- depth 6:
Postscript -
DVI
- depth 8:
Postscript -
DVI
obtained from the original using pf.sty commands,
which show the higher-level structure of the proof.
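The shape of such an implementation proof, that every step of the concrete buffer corresponds, under a refinement mapping, to a step of the abstract buffer, can be mimicked by exhaustive exploration of a finite instance. This toy (capacity and value set invented) checks the correspondence by enumeration rather than by deduction, so it is an illustration of the proof obligation, not a proof.

```python
from itertools import product

CAP, VALS = 2, (0, 1)

def abstract_of(ring, head, count):
    """Refinement mapping: the abstract queue a ring buffer represents."""
    return tuple(ring[(head + i) % CAP] for i in range(count))

def check_refinement():
    # enumerate every concrete state: ring contents x head x count
    for ring, head, count in product(
            product(VALS, repeat=CAP), range(CAP), range(CAP + 1)):
        q = abstract_of(ring, head, count)
        if count < CAP:                      # concrete Put(v) is enabled
            for v in VALS:
                ring2 = list(ring)
                ring2[(head + count) % CAP] = v
                q2 = abstract_of(tuple(ring2), head, count + 1)
                assert q2 == q + (v,)        # maps to the abstract Put step
        if count > 0:                        # concrete Get is enabled
            q2 = abstract_of(ring, (head + 1) % CAP, count - 1)
            assert q2 == q[1:]               # maps to the abstract Get step
    return True

print(check_refinement())
```

In TLA the same obligation is discharged once and for all, for unbounded buffers, by the hierarchical proof.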
-
Lazy Caching in TLA
Peter Ladkin, Leslie Lamport, Bryan Olivier, and Denis Roegel
18 January 1996 - Final Version
- Assertional methods, formalized by TLA, are used to specify
and verify the correctness of a cache coherency algorithm
of Afek, Brown, and Merritt. The algorithm is described
informally in
-
Verifying Sequential Consistent Memory Problem Definition
Rob Gerth
April 1993
Postscript
which should be read first.
Postscript -
DVI -
LaTeX
-
Lazy Cache Implements Complete Cache
Peter Ladkin
7 January 1996
- The formal TLA proof that the Lazy Caching Algorithm implements
a simplified algorithm called the Complete Cache, in which all
caches contain entries for all memory addresses, there is no
separate memory and there are no memory refresh operations.
This proof is a step in proving that Lazy Caching is sequentially
consistent. The choice of the invariant for the safety proof
is the main issue. Other parts of the Lazy Cache proof will appear
here as they become available.
Postscript -
DVI
- The analysis of an operating manual in
Analysis of a Technical Description of the Airbus
A320 Braking System
is also accomplished using TLA Predicate-Action Diagrams.
- The formal analysis method
Why-Because Analysis (WBA) is also
based on TLA and the hierarchical proof style.
Back to Contents
Interest is growing in the telecommunications community in the use of
Message Sequence Charts (MSCs), an ITU-T standard (Z.120) language.
This joint work with Prof. Stefan Leue of the University of Waterloo in
Canada defines a straightforward interpretation of both `basic' MSCs
and `high-level' MSCs into state machines. Various semantic problems with
the MSC standard arise, which can be seen graphically (literally!) in our
interpretation of MSCs into the automated verification tool Promela/XSpin
from Gerard Holzmann's group at Lucent Technology's Bell Laboratories.
-
Implementing and Verifying Message Sequence Chart Specifications
Using Promela/XSpin
Stefan Leue and Peter Ladkin
11 September 1996
- Abstract:
We discuss a translation of Message Sequence Charts (MSCs) into the
language Promela (we call this translation an
`implementation') that is consistent with the formal semantics
we have previously defined for Message Flow Graphs and Message Sequence
Charts, which handled the syntactic features with mathematical import
from ITU-T recommendation Z.120.
We report on experiments executing the Promela code using the
Xspin simulator and validator. In previous work we found that
potential process divergence and non-local choice
situations impose problems on implementations of MSCs, and we discuss
how these impact our Promela translation and suggest solutions.
Finally, we show how to model-check liveness requirements
imposed on MSC specifications. We use the Promela models obtained
from our implementation, describe how to use control state
propositions based on these models, use Linear Time Temporal Logic
formulas to specify the liveness properties, and demonstrate the use
of XSpin as a model checker for these properties.
(Zip-ed Postscript, 1.5Kb)
pp. 65-89 in The SPIN Verification System,
ed. Jean-Charles Grégoire, Gerard J. Holzmann and
Doron A. Peled, DIMACS Series, Vol. 32,
American Mathematical Society Press, 1997.
-
Comments on a Proposed Semantics for Basic Message Sequence Charts
Peter Ladkin and Stefan Leue
January 1995
- Abstract:
We comment on a paper by Mauw and Reniers.
(GZip-ed Postscript, 5K)
in The Computer Journal, 37(9), January 1995.
-
Interpreting Message Flow Graphs
Peter Ladkin and Stefan Leue
28 October 1994
- Abstract:
We interpret Message Flow Graphs as finite state machines, in which
the states are the global states of the system. One important application of
MFGs is as Message Sequence Charts, MSCs. Liveness properties of MSCs
are underdefined by the Z.120 standard. We show how to compose MFGs
(insofar as sense can be made of this operation without considerable
qualification) and what it means, and we show how to use temporal logic
to specify liveness properties precisely.
Formal Aspects of Computing 7(5):473-509, 1994
(Zip-ed Postscript, 167K)
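The interpretation can be sketched for the synchronous case: a global state is the tuple of per-process positions, and a transition fires when one process's next event is a send and another's is the matching receive. The two-process example is invented; the paper's construction of course handles much more (conditions, composition, liveness).

```python
# Sketch: the global-state machine of a Message Flow Graph under
# synchronous (rendezvous) communication.  Each process is its finite
# sequence of events; reachable global states are enumerated, and the
# state space is finite, as the papers argue.

def global_states(processes):
    start = (0,) * len(processes)
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for i, pi in enumerate(processes):
            if state[i] >= len(pi) or pi[state[i]][0] != "send":
                continue
            chan = pi[state[i]][1]
            for j, pj in enumerate(processes):
                if (j != i and state[j] < len(pj)
                        and pj[state[j]] == ("recv", chan)):
                    nxt = list(state)
                    nxt[i] += 1
                    nxt[j] += 1              # rendezvous: both advance
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return seen

P = [("send", "a"), ("recv", "b")]
Q = [("recv", "a"), ("send", "b")]
states = global_states([P, Q])
print(sorted(states))    # [(0, 0), (1, 1), (2, 2)]
```

Liveness properties are then temporal-logic formulas over exactly this finite set of control states.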
-
Four Issues Concerning the Semantics
of Message Flow Graphs
Peter Ladkin and Stefan Leue
1994
- Abstract:
We discuss four issues concerning the semantics of Message Flow
Graphs (MFGs). MFGs are extensively used as pictures of
message-passing behavior. One type of MFG, the Message Sequence Chart
(MSC), is ITU Standard Z.120. We require that a system described
by an MFG have global states with respect to its message-passing
behavior, with transitions between these states effected by
atomic message-passing actions. Under this assumption, we argue
(a) that the collection of global message states defined by an
MFG is finite (whether for synchronous, or partially-asynchronous
message-passing); (b) that the unrestricted use of `conditions'
requires processes to keep control history variables of
potentially unbounded size; (c) that allowing `crossing' messages
of the same type implies certain properties of the environment
that are neither explicit nor desirable, and (d) that liveness
properties of MFGs are more easily expressed by temporal logic
formulas over the control states than by Büchi acceptance
conditions over the same set of states.
Appeared in Formal Description Techniques VII,
ed. D. Hogrefe and S. Leue, IFIP Series,
Chapman and Hall, 1995.
(Zip-ed Postscript, 81K)
Back to Contents
-
Statement on Digital Wireless
Technologies Prepared for the EU 6th
Framework Consultation
Jan Hennig
8 April 2004
- Abstract
Digital radio-frequency communication technology (which we refer to below simply as Digital
Wireless or DW) has been available for many years. Recently, miniaturisation and the application
of mass production techniques have led to new uses such as RFID tagging, which interests
the retail industry and distributors. Such non-traditional uses of RF communication brings DW
technology into increasing contact with ordinary citizens. The problems which arise have been
insuficiently analysed. To enable appropriate use of RFID and related DW technology, these
social and ultimately political problems must be specified and analysed more thoroughly.
[PDF, English, 96K]
-
Preserving Privacy in RFID Deployment
Jan Hennig
23 March 2004
- Abstract
Radio Frequency Identification, RFID, is an item-tagging technology which
interests suppliers and retailers. RFID has the potential to revolutionise supply
chain management, but also has some potentially distressing social implications.
When used inappropriately, RFID is capable of reducing or eliminating customer
anonymity, thereby damaging privacy and threatening civil liberties. I begin
with an introduction to the technological background. Second, I explain pros
and cons by means of some example scenarios and, third, outline how this
technology may be introduced in a way that can preserve privacy.
[ PostScript Version (119K) |
PDF Version (80K) ]
-
Talking to Newspapers: A Cautionary Tale with Moral
Peter B. Ladkin
16 July 1999
- Abstract
On 24th and 25th June, 1999, I was interviewed by the Sunday Times
for an article which they published on 27th June, 1999. The published
article significantly misrepresented the information I had shared. The
misrepresentation was picked up and furthered by newspapers in Germany,
Austria, Australia and perhaps other countries. Here is the story, and the
truth, along with a moral.
[HTML, English, 26K]
-
Analysis of Data Discontinuities
Michael Blume, Dominic Epsom, Heiko Holtkamp, Peter B. Ladkin,
I Made Wiryana
25 January 1999
- Abstract
Many computer scientists believe that the so-called Year 2000
problems are technically trivial. We disagree. We analyse data
representation in computers and devise verification criteria for
satisfactory representation. We illustrate what can go wrong if
these criteria are not met, by means exclusively of Year-2000-related
examples (maybe the most significant aspect of the series of
anticipated problems is that the coverage is so broad!). The paper
is written without assuming any specific mathematical knowledge
on the reader's part, although familiarity with some ideas in
modern math will help some, we expect.
[HTML, English, 94K]
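One of the representation failures discussed can be seen in miniature: with two-digit years an ordinary comparison inverts across the century boundary, and the common "windowing" repair recovers it only inside the chosen window. The pivot value below is illustrative.

```python
def later_naive(yy_a, yy_b):
    """Is two-digit year b later than a?  Wrong across 99 -> 00."""
    return yy_b > yy_a

def expand(yy, pivot=50):
    """Windowing repair: 00..49 -> 2000..2049, 50..99 -> 1950..1999.
    The pivot is an arbitrary choice; dates outside the window still
    break, which is why windowing is a palliative, not a fix."""
    return (2000 if yy < pivot else 1900) + yy

def later_windowed(yy_a, yy_b):
    return expand(yy_b) > expand(yy_a)

# 1999 -> 2001: the naive comparison inverts the order...
print(later_naive(99, 1))      # False
# ...while windowing recovers it, inside the 1950-2049 window only.
print(later_windowed(99, 1))   # True
```

The verification criteria developed in the paper are precisely about making the representation-to-represented mapping, and its limits, explicit.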
-
The Year 2000 Problem
Heiko Holtkamp, Peter B. Ladkin
(Deutsche Übers. Michael Blume)
30 October 1998
- Abstract
We describe informally a series of data representation problems
known variously as "The Y2K Problem", "The Year 2000
Problem", or "The Millennium Bug" (all singular).
There is in fact a series of such problems surrounding the
millennium change. We describe and categorise them here.
[HTML, 24K ]
- Iterative Decomposition of a Communication-Bus System using Ontological Analysis
Jörn Stuphorn
July 2005
- Abstract:
The Ontological Analysis consists of two phases: the phase of iterative
decomposition and the phase of hazard and risk analysis. The aim of the iterative
decomposition is to describe the elements and their interactions in increasing
detail, whereas the aim of the hazard and risk analysis is to evaluate the
risk imposed by the hazards identified for the described system.
This work concentrates on describing the methods used for the ontological
analysis and on demonstrating the process of iterative decomposition on the example
of a communication-bus system.
The subject analysed is a communication system able to transmit time- as well
as event-triggered messages. If safety-critical devices like the steering or braking
systems are to be connected without mechanical backup, a high level of confidence
in the system used for connection of the devices has to be achieved. In
newer aircraft the steering is done by using Fly-by-Wire systems that digitise
the pilot's control input and transmit this information via controlling systems to
the control surfaces. The automotive industry tries to adapt this approach with
new control systems for vehicles, that are to be used in X-by-Wire or Powertrain
communication solutions for the breaking- and steering- respectively the motormanagement-system.
As the communication solution applied in aircraft is very
expensive, this solution will most certainly not be adopted in automobiles, where
the count of units produced is very high and cheaper solutions are sought.
Existing communication systems in the automotive industry use event-triggered
communication protocols like CAN or LIN for the transmission
of information between devices. As possible failures in the communication of
the braking system can have grave outcomes, customer acceptance
and the reputation of a brand are threatened. Communication systems for safety-critical
tasks must guarantee that the information is transmitted within defined
timing requirements. The difference between time- and event-triggered protocols
becomes obvious if one observes the time needed for a message to travel from
sender to receiver: the latency. For event-triggered communication protocols
it is only possible to achieve a required latency with a certain likelihood. In
contrast, time-triggered communication systems can guarantee compliance with
the required communication attributes, but tend to make inefficient use of the available
network resources.
A communication system combining guaranteed compliance with the required
communication attributes and the possibility of integrating existing communication
solutions is therefore very attractive. Such a communication system enables the implementation
of new control technology while supporting an easy migration path
for existing devices.
This work is divided into five parts. In the first part, the needs of communication
protocols in the automotive domain are investigated. The fundamental design
options for networks and proposed classifications are described. The second part
describes the methods used in the iterative-decomposition phase of the ontological analysis,
while the third part focuses on the methods used in the phase of risk
assessment. The fourth part demonstrates the application of iterative decomposition
to the communication system whose needs were identified in the first part.
Finally, the fifth part consists of the conclusions this investigation led to, and the
outlook.
[ PDF 1.88M |
PDF 1.43M, gzipped ]
- Formal Task Analysis of Graphical System Engineering Software Use
Thilo Paul-Stüve
10 March 2005
- Abstract:
Why-Because Analysis is a failure analysis method designed for complex
systems, such as airliners or cars. A graphical software tool called
YB-Edit supports the analysis in applying the Why-Because Graph method,
which is an essential part of Why-Because Analysis. I determine the
tasks that specify Why-Because Analysis using the Hierarchical Task
Analysis method to identify the field of application of YB-Edit. Using
GOMSL, I examine the use of YB-Edit to accomplish specific tasks during
a Why-Because Analysis. The results of these analyses form the basis
for a new interface design optimised for usability. Its use for
achieving the very same tasks as before is analysed with GOMSL. I
compare the results of the analyses of the use of YB-Edit and the use
of the new interface design. Finally, I discuss the improvements in
usability achieved by using the new interface design.
[ PDF 4.89M ]
- Visualisation Concepts and Improved Software Tools for Causal System Analysis
Bernd Sieker
27 February 2004
- Abstract:
The Why-Because Analysis is a well-known and widely-used method
for examining causal factors of incidents and accidents. It has
traditionally been used by creating textual representations of the
causal relationships of events, processes and states that led to
the incident and having the graph drawn by a simple command-line
tool. The creation of the graphical tool CiEdit was a major step
forwards in ease-of-use. I am going to point out some weak points
of the existing tools and propose remedies along with a prototype
implementation of some of them, in particular a way of splitting
a large graph into a skeleton graph and several subgraphs. In the
second part I will present a Why-Because Analysis of a high-profile
accident, the crash of the supersonic airliner Concorde in July
2000 near Paris. I will demonstrate how the proposed improvements
to the Why-Because tools will help to gain a better understanding
of the events leading to the disaster. As a side note I will look
into some popular myths about the Concorde accident that have been
circulated in the press, and refute them in detail by looking at
the actual findings of the investigators.
[ PDF 2.94M ]
- Spezifikation und Implementation eines sicheren Lernerfolgskontrollmoduls für CSCL - Werkzeuge
Andre Döring
15 October 2003
- Abstract:
The aim of this work is the implementation and integration of a software
module for assessing learning success into an existing eLearning tool. For
this module I have chosen the form of the multiple-choice test. The
development of such a module, however, raises some questions beforehand.
First, it must be clarified whether and how such a multiple-choice test can,
from a didactic point of view, be integrated into an eLearning system. It
must likewise be considered what requirements potential users of
eLearning systems place on such a module.
eLearning systems are network-based software systems, and in my view it
must therefore be checked, in addition to the requirements analysis, which
security mechanisms must be integrated into my module so that it does not
become a security risk within an eLearning tool.
Since no analyses of the security problems of eLearning systems were to be
found in the literature, I take this as an occasion to carry out, in
advance, a general analysis of the security problems of eLearning systems.
I then use the results of this security analysis as the basis for the
specific analysis of the security problems of my software tool.
The work is divided into two parts. In Part I, I first present the
eLearning concept considered in this work: computer-supported
cooperative learning (CSCL). I then investigate in what form
multiple-choice tests can sensibly be integrated, didactically, into CSCL
systems for assessing learning and performance. Finally, I consider which
security problems exist for CSCL systems and which security mechanisms are
necessary to contain these problems.
Part II puts the insights gained in the first part into practice. It
describes the specification of the multiple-choice-test module for
assessing learning success within the CSCL system. This specification is
likewise subjected to a security analysis, based on the results of the
general security analysis for CSCL systems and on the specification of the
module. The implementation unites the results of the analyses and the
specification of the module into a software system.
The aim of the work is thus achieved: the specification and development of
a secure learning-assessment module for CSCL tools.
[ PDF 807K ]
- Konzeption eines verteilten Datenarchivierungssystems
Jan E. Hennig
5 September 2003
- Abstract:
A comparison of current archiving systems shows that none of them manages
the balancing act between specialisation and continued general
applicability. In this thesis, the desirable features of the current systems
are formally specified as requirements. A program fulfilling these
requirements is then designed, and the prototype implementation developed
from it, "VDAS - Verteiltes Datenarchivierungssystem" (distributed data
archiving system), is presented. The program is designed as a distributed
system.
In the course of the design, various problems and their solutions are
described. Among other things, a concept is presented for queues backed by
persistent storage, for the guaranteed delivery of messages over an
unreliable network, together with an extension of the two-phase commit
protocol based on it, a robust commit protocol for controlling distributed
ACID transactions that requires no additional protocol overhead.
The resulting program is able to carry out its archiving task even when one
or more parts of the system at remote sites fail. Users' work is eased by
aids such as feature extraction and format conversion. These functions can
be extended later by adding new plugins and so adapted to changed
operational requirements. In addition, there is an extensive group-based
rights management system, operated by the administration, which ensures
that users of the system can only enter, find and retrieve data for which
they are authorised. The data themselves can be grouped and sorted into
categories, and freely defined metadata can be attached to them.
[ PDF 6.43M |
PDF 902K, gzipped |
PS 12.55M |
PS 784K, gzipped ]
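The persistent-storage-backed queue mentioned in the abstract above can be sketched in miniature: a message is written to durable storage before it counts as sent, and removed only after the receiver has acknowledged it, so delivery survives a crash of the sending process (at-least-once semantics). The class and method names, and the JSON file as "persistent storage", are illustrative assumptions only and do not reflect the VDAS implementation.

```python
import json
import os

class PersistentQueue:
    """A toy durable queue: state is synced to disk on every change."""

    def __init__(self, path):
        self.path = path
        self.messages = []
        if os.path.exists(path):
            # Recover any messages that survived a crash.
            with open(path) as f:
                self.messages = json.load(f)

    def _sync(self):
        with open(self.path, "w") as f:
            json.dump(self.messages, f)

    def enqueue(self, msg):
        self.messages.append(msg)
        self._sync()  # durable before the message counts as sent

    def peek(self):
        return self.messages[0] if self.messages else None

    def ack(self):
        # Remove a message only after the receiver has confirmed it.
        self.messages.pop(0)
        self._sync()
```

Because a message is deleted only after acknowledgement, a crash between delivery and acknowledgement causes redelivery rather than loss, which is the usual trade-off behind "guaranteed delivery" over an unreliable network.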
-
Komplexitätsbetrachtung einer
Softwareumstellung an Beispielen von SAP R/2
Olaf Kerger
9 October 1998
- Abstract
McHa is a software complexity measure developed for SAP R/2 programs.
It was developed as a measure approximating Halstead's `Software Science'
measure, but one much easier to evaluate. It is based on modifying McCabe's
cyclomatic complexity measure with a correction factor obtained by evaluating
a small subset of Halstead's parameters.
McHa has been successfully used to assess the complexity of R/2 programs
which it would not be practical to assess using Halstead's raw technique.
[HTML intro page ]
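McCabe's cyclomatic complexity, the base of the McHa measure described above, can be sketched as a decision-point count. This is a hedged illustration only: the keyword list below is for a generic language, and the actual McHa measure operates on SAP R/2 (ABAP) constructs and applies a Halstead-based correction factor not shown here.

```python
import re

# Tokens that open an extra path through the control-flow graph.
# Illustrative set for a generic language, not the ABAP constructs
# McHa actually counts.
DECISION_KEYWORDS = ("if", "elif", "while", "for", "case", "and", "or")

def cyclomatic_complexity(source):
    """McCabe's V(G) for a single routine: decision points + 1."""
    tokens = re.findall(r"[A-Za-z_]+", source.lower())
    return 1 + sum(tokens.count(k) for k in DECISION_KEYWORDS)
```

Straight-line code scores 1; each branch or short-circuit operator adds one path, which is why the measure is cheap to compute compared with gathering the full set of Halstead operator/operand counts.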
-
Vergleichende Analyse von elektronischen Geldtransfer-Systemen
Andreas Kaiser
5 August 1998
- Abstract
Formal Methods were used to investigate whether electronic retail
payment systems (`Electronic Cash') fulfil the expectations of such
systems from the user's point of view. The major question discussed
is whether such systems fulfil the requirements and constraints of
hard cash. The payment behavior of various electronic cash systems is
described and compared with the help of the Temporal Logic of Actions.
The manuscript is written in German.
[Postscript German, 260K, gzipped ]
Back to Contents
I have had an ongoing interest for a decade in temporal reasoning using
intervals, and in constraint satisfaction problems in general. Here are the results.
-
A Note on a Note on a Lemma of Ladkin
Peter Ladkin
13 September 1996, revised 14 October 1996
- Abstract Antony Galton recently published a paper correcting
a Lemma of mine - well, sort of, because the Lemma was not false.
However, it was useless as stated. In
this note,
I put the record straight(er),
and offer a few comments on the social background.
-
Simple Reasoning With Time-Dependent Propositions
Maroua Bouzid and Peter Ladkin
4 November 1995
- Simple practical reasoning with propositions whose truth
values depend on time is a matter of logical engineering.
Here's one approach, along with some algorithms for implementing it.
We also consider reified and non-reified logics, and show that
a reified logic is more appropriate than its non-reified equivalent,
when time references are interpreted as union-of-convex intervals.
To appear in the Journal of the IGPL.
Postscript -
DVI
-
Fast Algebraic Methods for Interval Constraint Problems
Peter Ladkin and Alexander Reinefeld
21 August 1996
- We implement a constraint satisfaction technique for qualitative relations
on intervals over dense linear orderings (sometimes called
"time intervals"), and show how to solve such problems fast, where the
bottlenecks are, and what the problem space looks like.
This is an Invited Paper for the
Annals of Mathematics and Artificial Intelligence,
based on a shorter paper in Springer LNCS 737,
Artificial Intelligence and Symbolic Mathematical
Computing, ed. Calmet and Campbell, 1993, with
some new results on speed-up of the composition operation.
Postscript -
DVI -
LaTeX (needs
prepictex.tex,
pictex.tex, and
postpictex.tex)
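The path-consistency computation at the heart of the interval constraint work above can be sketched on the Point Algebra, the smallest of the algebras studied: each constraint is a subset of {<, =, >}, and the algorithm repeatedly intersects R(i,j) with the composition R(i,k);R(k,j) until a fixed point. This is an illustrative sketch under those assumptions; Allen's Interval Algebra works the same way over 13 base relations, only with a larger composition table, and the papers' speed-ups of the composition operation are not shown.

```python
from itertools import product

# Composition table for the Point Algebra base relations
# over a dense linear order.
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r, s):
    """Composition of two constraints (sets of base relations)."""
    out = set()
    for a, b in product(r, s):
        out |= COMP[(a, b)]
    return out

def path_consistency(R):
    """Tighten R in place until path-consistent.

    R: dict (i, j) -> set of base relations, for all ordered pairs
    of distinct points. Returns False if some constraint becomes
    empty (the network is inconsistent), else True.
    """
    n = max(i for i, _ in R) + 1
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = R[(i, j)] & compose(R[(i, k)], R[(k, j)])
            if tightened != R[(i, j)]:
                R[(i, j)] = tightened
                changed = True
                if not tightened:
                    return False
    return True
```

For instance, from x0 < x1 and x1 < x2 the algorithm tightens an initially unconstrained R(x0, x2) to {<}. On the Point Algebra this procedure decides consistency; for the full Interval Algebra it is only an approximation, which is one reason the structure of the problem space matters.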
-
On Binary Constraint Problems
Peter Ladkin and Roger Maddux
25 January 1995
- The concepts of binary constraint satisfaction problems can be
naturally generalized to the relation algebras of Tarski. We do that.
Among other things, we give an example of a $4$-by-$4$ matrix of
infinite relations on which no iterative local path-consistency
algorithm terminates; and we give a class of examples over a fixed
finite algebra on which all iterative local algorithms, whether
parallel or sequential, must take quadratic time. Specific relation
algebras arising from interval constraint problems are also studied:
the Interval Algebra, the Point Algebra, and the Containment Algebra.
Appeared in the Journal of the ACM 41(3), May 1994, 435-469.
Postscript -
DVI -
LaTeX (needs
prepictex.tex,
pictex.tex, and
postpictex.tex)
-
An Algebraic Approach to General Boolean Constraint Problems
Hans-Werner Güsgen and Peter Ladkin
23 April 1995
- The work of Ladkin and Maddux concerned binary CSPs.
We show in this paper how to treat general CSPs also by algebraic
logic, this time using Tarski's cylindric algebra.
We formulate all the usual concepts of CSPs in this
framework, including k-consistency, derived constraints,
and backtrack-freeness, give an algorithm scheme for
k-consistency, and prove its correctness.
Postscript -
DVI
-
Integrating Metric and Qualitative Temporal Reasoning
Henry Kautz and Peter Ladkin
July 1991
- We show how metric and Allen-style constraint
networks can be integrated in a constraint-based reasoning
system. We use a simple but powerful logical language for
expressing both quantitative and qualitative information;
develop translation algorithms between the metric and Allen
sublanguages that entail minimal loss of information; and
explain a constraint-propagation procedure for solving problems
expressed in a combination of metric and Allen constraints.
Appeared in the Proceedings of AAAI-91 (Anaheim, CA)
MIT/AAAI Press, July 1991.
Postscript -
DVI
My Thesis
-
The Logic of Time Representation
Peter Ladkin
November 1987
-
Every so often, I get requests for my thesis or for some Kestrel
Institute Technical Reports on work reported in my thesis, so
here it is. It includes work published in papers in AAAI-86,
AAAI-87, IJCAI-87, ICSE-87 and AAAI-88
(a longer version with proofs), plus a little more.
It's about 130 pages long.
Postscript -
DVI
Back to Contents
Pilot-controller communication is one of the most formal natural languages,
designed for unambiguous and clear communication of aviation control tasks and
intentions. Nevertheless, communication may go awry, as in the unfortunate
American Airlines accident of 20 December 1995 on approach to Cali, Colombia.
This project, joint with Prof. Dafydd Gibbon of the University of Bielefeld
Faculty of Linguistics and Literature, aims to analyse formally aspects of the
language and language use, with the goal of contributing both to aviation safety
and to formal linguistics. This project is in very early stages.
One result so far is
Comments on Confusing Conversation at Cali
Back to Contents
Some essays on topical themes, and multiparty discussions,
are collected here.
-
University Education in the US, UK and Germany:
A Quick Comparison
Peter Ladkin
11 December 1997
- Abstract:
A point-by-point comparison (somewhat superficial) of university teaching,
learning, and assessment in these three countries.
This précis was written as notes for
an extra-curricular lecture to Bielefeld students on December 16 1997
during the demonstration concerning university reform.
(HTML)
-
Ziele zur Hochschulreform
Dirk Stössel u.a.
2 December 1997
- Abstract:
A discussion paper consisting of a list of major points, reasons, and
counter-reasons, on the topic of German University Teaching Reform.
(German, HTML)
-
On Keeping Your Mouth Shut: Discussing Aircraft Accidents in
Public
Dick Mills, Robert Dorsett and Peter Ladkin
9-12 September 1996, revised 22 October
- Abstract:
Dick Mills proposed in
RISKS-18.42
that speculation and discussion
of aircraft accidents in public as they happened had negative
consequences, including continuing distress to sufferers, and
suggested it should be voluntarily limited to the time after the
accident report is published. Robert Dorsett cited some examples
of timely discussion which, although not always accurate, had
beneficial effects, and Peter Ladkin gave six reasons why he
thought Mills's proposed restriction would not be beneficial.
Start here.
-
Future University Computing Resources
Peter Ladkin
7 November 1995
- Abstract:
I consider what kinds of computing resources a University will
be providing to students and the community. I hope that some of
the ideas will have wider application than just to Universities.
This is an HTML script. I'd be very
glad to receive your comments.
-
A Debate on Social Issues with Use of the Internet
Harold Thimbleby, Peter Ladkin, Brian Randell and others
23 September 1995
- Abstract:
Harold Thimbleby gave a talk to the British Association for the
Advancement of Science 1995 Annual Meeting on what he regards as
some current unhealthy developments on the Internet. The talk
got extensive coverage in the British press. Brian Randell
strenuously disagreed at the talk with both Harold's presentation
and his views. Brian's views did not get similar media coverage.
But here on the Web, all sides get debated.
Start here.
Back to Contents