
Reasons and Causes

Peter Ladkin

Research Report RVS-RR-96-09

The language of causation is ubiquitous in science and engineering, despite David Hume's suggestion that there isn't any such thing (1). I want to consider specific events that actually happened. I'd like to know how to describe causal connections between them, and why and when to describe them. There seem to be few puzzles in wondering what one means in saying that the pilot's deliberate shove caused the throttle levers to move forward. He pushed them firmly; because of that, they moved; and there was nothing untoward which prevented them from moving (his lunch didn't fall in the groove and block the levers). The first is the `cause', the second the `effect', and the third the ceteris paribus conditions. What about causes further back: why did he push? Because he decided to, and then he did it. But he could have decided to push the levers forward -- and then simply not done so. His intention was a reason why he pushed; naively one might even be tempted to say `cause', but it doesn't seem to be a physical cause (unless one believes in very strict psychological determinism), because it needn't have resulted in the actual push.

I want to explain accidents, and explaining part of them requires giving causes. Human decisions and actions are often part of the explanation, and, as we have seen, these are not necessarily, or even usually, causal in the way we expect physical events to be. Nevertheless, accident reports sometimes speak of pilots' decisions as `causes' (as in the report on the Lufthansa A320 accident in Warsaw -- see (2)).

Accident analyses talk about causes in order to figure out how to avoid similar accidents in the future. According to Robert Sweginnis (of Embry-Riddle Aeronautical University and a former USAF Aircraft Accident Investigation Instructor),

Air Force Instruction 91-204, Safety Investigations and Reports, defines a cause as "... an act, omission, condition or circumstance which either starts or sustains a mishap sequence. It may be an element of human or mechanical performance. A given act, omission, condition, or circumstance is a "cause" if correcting, eliminating, or avoiding it would prevent the mishap or mitigate damages or injuries."

Aviation accident reports determine `causes' and `contributory factors' of accidents. Such reports are amongst the most careful and most detailed of all accounts of accidents. However, they use no formal notion of cause and no formal reasoning procedures. Sweginnis puts his finger on some of the problems this causes (so to speak):

The International Society for Air Safety Investigators (among others) has been arguing the issue of "What is a cause?" and "Is identifying causes necessary or even counter productive to safety in general and aviation specifically?" There is a large contingent that would like to "outlaw" the use of the word "cause." I personally like the use of cause when used in the context of "root cause" but not in the context of a "proximate cause." The root cause focuses on systemic problems which need to be fixed. Proximate causes tend to focus on the events closest to the accident, and are often looking to place blame (read liability) rather than fixing the system.

Some formality may help. For a start, does the predicate `causes' have two arguments (a single cause and a single effect), or does it rather have many? The pilot's shove causes the levers to move, but then David Lewis (3) notes there could be rather a lot:
An explanandum event has its causes. These act jointly. We have the icy road, the bald tire, the drunk driver, the blind corner, the approaching car, and more. Together, these cause the crash. Jointly, they suffice to make the crash inevitable, or at least highly probable, or at least much more probable than it would otherwise have been. And the crash depends on each. Without any one it would not have happened, or at least it would have been very much less probable than it was.

We may say that causes are jointly sufficient and severally necessary. This was, crudely, the view of John Stuart Mill (5) (the `sufficient' bit) and Ernest Nagel (6) (the `necessary' bit). However, classical logical notions don't seem to suffice to enable us straightforwardly to define what causes are -- Ernest Sosa gives some counterexamples in the introduction to (7). Some more criteria are needed.

Another question: if A causes B and B causes C, does it follow that A causes C? (This is the property of transitivity.) Statements of causality such as A causes B have generality: If A happens, then, ceteris paribus, B happens. More or less every time. But what about explaining a specific historical event: if A caused B and B caused C, did A cause C? This is not general but singular. Is transitivity for either justifiable?

Looking for the `root cause' suggests that the USAF wants causality to be transitive. An event A starts a chain which leads inexorably to an accident through intermediate events. But to be transitive, to make a chain, causality must also be a binary relation, so we had better figure out the answer to the first question first.

An even more basic question: in a statement A causes B, what are A and B? Are they pictures, Platonic forms, propositions, events, sentences, or expectant mothers? If they're sentences or propositions, what are they sentences and propositions about? If they're events, how are these events described? Examples of such sentences: It's 4pm, which will cause it to be 4.01pm in about one minute, or The pilot pushed the throttle levers, which caused them to move; or events: The pilot's shove caused the movement of the throttle levers, or Humpty's fall down caused his crown to break; or even a mixture of event and thing: Humpty's fall down caused the break in his crown.

So, how many arguments does `causes' have; if binary, is it transitive; and of what type of thing are the arguments? Answers to such questions may help to clarify discourse about causes.

Reasoning About Causal Histories

In (2), I suggested laying out all the `main' causal factors of an accident in order to determine the potential causal relations. I suggested a notion of `reason' broadly consistent with observations of Lewis (3) on causal explanations. I suggested discovering the causal factors by asking why...because questions and giving answers, continuing to ask backwards until one reached a reasonable stopping point. Lewis warns against considering this a complete procedure, because there's simply too much stuff to fill in, but nevertheless one gets the needed answers:

In other cases, it isn't feasible to provide maximal true answers. There's just too much information of the requested sort to know or to tell. Then we do not hope for maximal answers and do not request them, and we always settle for less. The feasible answers do not divide sharply into complete and partial. They're all partial, but some are more partial than others. There's only a fuzzy line between enough and not enough of the requested information. "What's going on here?" - No need to mention that you're digesting your dinner. "Who is Bob Hawke?" - No need to write the definitive biography. Less will be a perfectly good answer. Why-questions, of course, are among the questions that inevitably get partial answers.

Andy Fuller (a highly-reliable systems engineer - parse it whichever way you like - and a RISKS contributor whose comments I discussed in (2)) suggested a finer analysis of one of the assertions in my layout of reasons and causes (the `causal hypergraph') for the X-31 accident:
...you note:
  • The cause of loss of control was pitot icing.
I have to disagree with this statement. Pitot icing may have been a distal cause of the loss of control of the aircraft, but not a proximal cause. I propose the following:
  • The cause of loss of control was loss of accurate airspeed data.
  • The cause of loss of accurate airspeed data was pitot icing.
I believe the distinction to be important in this case. The X-31 exists in a world that is "not near" to the world inhabited by, for instance, the Piper Archer. In the world of the Archer, the pilot of the aircraft can maintain control if airspeed data is lost (indeed, Piper worked very hard to insure that simply releasing the controls would cause the aircraft to recover from many situations). In the world of the X-31, the airspeed data is as essential to flight as the ailerons, elevators, and rudder. Loss of airspeed data is as immediately disastrous as loss of the empennage.

Fuller's reasoning is compelling, and I agree it's an improvement. Comparing similar situations, similar `worlds' (flying `some plane'; respectively, flying the X-31), one would indicate in a causal explanation all appropriate differences. Since pitot icing wouldn't cause loss of control in an Archer, the statement pitot icing caused loss of control would fail to explain part of an Archer accident. Better, therefore, to factor more finely, into one causal statement true for most airplanes, and one true specifically for the design of the X-31. It reminds one of inheritance reasoning -- but for failures.

Fuller used the notion of `near' worlds -- situations similar to, not sufficiently different from, the actual situation. Nearness is a way of talking about the ceteris paribus conditions. I had observed in (2) that the notion of cause in accident analysis seemed to be related to that of counterfactual conditional: `If this event hadn't occurred, the accident wouldn't have happened'. But the event did occur, and the accident did happen. This is a way of explaining the `several necessity' of a causal factor. Also the sufficiency: `If the accident weren't to have happened, these events couldn't have occurred the way they did'. Counterfactuals are modal; that is, they assert what would have happened if things had happened differently from the way they actually did. We're in good company. Hume's Enquiry, Section VII, Part II gives two `definitions' (as quoted by Lewis (8)):

We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words where, if the first object had not been, the second never had existed.

The second criterion is counterfactual. Lewis's influential semantics for counterfactual conditionals is elaborated in (9). He takes the notion of possible alternative world seriously (as does the standard Kripke semantics for modal logics), and explains the meaning of a counterfactual had A happened, then B would have happened by considering whether B occurred in some `nearest' possible world in which A occurred -- a world in which A occurred that differs `minimally' from the actual world, from actual history. To clarify the intended meaning of `minimally different', suffice it to say that if I had taken my shoes off outside, my feet would have gotten cold, because in the minimally different world I simply took off my shoes, while it remained below freezing and the bus still came at 10.19, two minutes late, for the very same reasons as in actuality. A world in which the bus had come at 10.21 would be more different still, and a world in which the bus came not at all and it was very warm today would be yet more different. Because of the difficulties with explaining the notion of `caused' indicatively, following Hume we could consider counterfactuals, for which at least one plausible semantics encourages us to consider possible alternative worlds. According to this view, A caused B has a modal component to its meaning.
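
To make the nearest-world evaluation concrete, here is a toy sketch in Python. It assumes, more strongly than Lewis does, that similarity to the actual world is measured by a single number and that there is a unique nearest antecedent-world; the worlds and distances are invented purely for illustration.

    # Toy evaluation of `had A happened, B would have happened': find the
    # nearest world in which the antecedent holds, and check the consequent
    # there. The numeric `distance' is a crude stand-in for Lewis's
    # comparative similarity of worlds.
    worlds = [
        {"A": False, "B": False, "distance": 0},   # the actual world
        {"A": True,  "B": True,  "distance": 1},   # the nearest A-world
        {"A": True,  "B": False, "distance": 5},   # a more remote A-world
    ]

    def counterfactual(antecedent, consequent):
        a_worlds = [w for w in worlds if w[antecedent]]
        nearest = min(a_worlds, key=lambda w: w["distance"])
        return nearest[consequent]

    print(counterfactual("A", "B"))   # True: B holds at the nearest A-world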

In his comment, Fuller also implicitly denied transitivity, by denying a particular instance of it. If I could show that this notion of `causes' is transitive, he would then be committed to accepting the statement that pitot ice caused loss of control. I can't, because it isn't, when interpreted as a counterfactual. Not that that makes him right and me wrong. But why quarrel? His factoring satisfies us both -- showing that the causal hypergraph is a fairly robust method that can accommodate these differences of judgement on particular causal assertions.

What Causes, What Gets Caused, and By How Many?

The pilot's shove caused the throttle's forward movement. What kind of thing is the pilot's shove? It's something that happened, an event. Lewis again:
A causal history is a relational structure. Its relata are events: local matters of particular fact, of the sorts that may cause or be caused. I have in mind events in the most ordinary sense of the word: flashes, battles, conversations, impacts, strolls, deaths, touchdowns, falls, kisses,.... But also I mean to include events in a broader sense: a moving object's continuing to move, the retention of a trace, the presence of copper in a sample.

His idea of event is described in (10). We can denote events by describing what happened in them: the event that S, where S is a sentence. Examples: the event that the pilot pushed the throttle levers; the event that the throttle levers moved. When we do so, we must ensure that the description identifies the event uniquely. Since there are many pilots pushing throttle levers and many throttle levers moving all over the world all the time, we must ensure it's understood which one of these we mean. Once an event is unambiguously denoted by a sentence, we can enrich and complexify the description, provided it continues to refer to the same event. The event that the pilot pushed the throttle levers is the exact same event as The event that the pilot pushed the black throttle levers with his left hand, still a little sticky from the ketchup which had squeezed out of his hot chicken sandwich.

The phrase the event that S is P, asserting that the property P holds of this event, has been defined by Russell (11) (12) as there is one and only one thing x such that S, and P(x) holds also. This thing x must be an event. Although Russell's definition has been questioned, others such as Donald Davidson (14) have also held that quantification over events exhibits the logical form of many action statements, including causal ones. We can see from Russell's definition that any S which suffices to pick out x uniquely will do the job of identifying the event we are talking about. If this is so, we don't just have to consider predicates such as `is-P'; we can consider more complex ones, which have an object as well as a subject -- such as `caused'. On this reading, the event that S caused the event that T is translated as there is one and only one event x such that S and there is one and only one event y such that T and x caused y.

Note that speaking of the event that S in Russell's translation implies that such an event occurred. Suppose, as a hypothesis, that the occurrences of events A, B and C are jointly sufficient and severally necessary for the occurrence of D. We want to say therefore (A and B and C) caused D. None of the three events, alone or in pairs, sufficed to cause D, but all three together did. Does this mean that we cannot consider caused to be a binary relation, but rather one with an indeterminate number of subjects and one object? Maybe we can consider a super-event (A & B & C) consisting precisely of a joint occurrence of all three individual events. A is, say, the event that X; B, the event that Y; and C, the event that Z. Then (A & B & C) is the event that (X and Y and Z). Is there such a thing? We know that (X and Y and Z) is consistent, since all three events occurred. So something can satisfy all three conditions X, Y and Z together. Could more than one thing satisfy all three? If so, there would be at least two such events x and y, and both these events led causally to D. Suppose x alone had occurred. Then D would still have occurred. Then y could not be necessary for D, contradicting the hypothesis that it's an event whose occurrence was necessary for D. So, under the hypothesis, there's precisely one such event which is the occurrence of the jointly sufficient and severally necessary conditions for D.

A super-event is thus a device allowing events, provided they actually occurred (a necessary hypothesis), to inherit the Boolean structure of propositions. And thereby we may reduce causal assertions about actual events to a purely binary relation: (A & B & C) caused D, with precisely two event arguments, (A & B & C) and D. So, when talking about what happened, we can consider causality to be a binary relation. But how do we enumerate all the jointly sufficient and severally necessary conditions? We make all the basic observations, and then we have to perform lots of conjunctions. How inconvenient.

Maybe we can back off from this strong requirement: we may wish to define the relation pcaused (for partially-caused): A pcaused B if and only if A is one of the jointly sufficient and severally necessary causes of B. The relation pcaused is formally a little easier to use than caused, because it's binary, and doesn't require any Boolean operations on sentences to generate one of its denoting terms from the raw observations. The relation pcaused is used in the causal hypergraph. Using the notation &S to denote the super-event formed from all the events in the set S, we can express the connection between pcaused and caused by &{ A | A pcaused B } caused B. We can also define the relation is part of between events: A is part of B just in case (A & B) is the same event as B, in other words (A & B) = B. Further, we may note the equivalence: A pcaused C just in case there is an event B such that B caused C and A is part of B. For more on parts in general, see (16).
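
As a sketch of how this structure might be realised, the following Python fragment models an event as a set of `atomic' occurrences, so that forming a super-event is just set union; this modelling choice, and the toy pcaused edges, are mine, not part of the essay's formal apparatus.

    # Super-events via set union, and `is part of' defined from it.
    def superevent(events):
        """&S: the joint occurrence of all the events in S."""
        return frozenset().union(*events)

    def is_part_of(a, b):
        """A is part of B just in case (A & B) = B."""
        return superevent([a, b]) == b

    # pcaused as explicit edges among events that actually occurred.
    A = frozenset({"pitot icing"})
    B = frozenset({"loss of airspeed data"})
    C = frozenset({"loss of control"})
    pcaused = {(A, C), (B, C)}

    # &{ X | X pcaused C } caused C: the binary cause of C.
    cause_of_C = superevent([x for (x, y) in pcaused if y == C])
    print(is_part_of(A, cause_of_C))   # True: A pcaused C, so A is part of the cause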

To emphasise, I've shown this only for events that actually occurred, which are the ones I care about in this essay. And it depends on the particular logical form I've assumed. It may work also for others. For example, with objects other than events: chocolate causes zits, so chocolate and chocolate and chocolate causes zits and zits and zits. Just kidding. It doesn't really, and I'm only concerned about things which really happened.

Worrying About Transitivity

Transitivity seems to hold for why...because... statements. Why A? because B. Why B? Because C. So A, because C.

Counterfactual implication using the Lewis semantics is not transitive. Lewis called the basic counterfactual a counterfactual dependency, and defined causal dependency from that, roughly like this: B causally depends on A if and only if the statement that B happened counterfactually depends on the statement that A happened; and statement C counterfactually depends on statement D if and only if had D happened, C would have happened, with the Lewis semantics for this latter. (Since his counterfactual semantics works for sentences, he has to proceed by correlating sentences with propositions, and propositions with events.) Causal dependence is thus Hume's counterfactual definition, above. He then defines A causes B to be simply that there is a chain of events from A to B, each of which is causally dependent on the one before it. To put it in formal terminology, causes is the converse of the transitive closure of causally depends. So, for Lewis, causal dependency is not transitive, but causes is.
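
The closure construction is easy to render in code. Below is a small Python sketch in which causally depends is given as toy edges and Lewis-causes is computed as reachability; the particular events are just the X-31 chain from earlier, used as invented data.

    # `A causes B' iff there is a chain of causal dependencies leading
    # back from B to A: graph reachability, i.e. transitive closure.
    def lewis_causes(depends_on, a, b):
        seen, frontier = set(), [b]
        while frontier:
            x = frontier.pop()
            if x == a:
                return True
            if x in seen:
                continue
            seen.add(x)
            frontier.extend(depends_on.get(x, []))
        return False

    deps = {"loss of control": ["loss of airspeed data"],
            "loss of airspeed data": ["pitot icing"]}
    print(lewis_causes(deps, "pitot icing", "loss of control"))   # True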

The intuitive reading of Lewis's definitions is via ceteris paribus conditions. I should give an example, then, of what to be wary of when arguing intuitively. First, a causal dependency. In a `nearest' world there are only minimal changes, changes that are required only because some given event C didn't happen there. The pilot pushed forward, his hand moved and the throttle levers moved forward. (Notice that the event that the throttle levers moved forward is the same event as that his hand moved and the throttle levers moved forward -- so I have made no refinement to the assertion.) If the throttle levers weren't to have moved forward, it seems more reasonable to suppose it would have been because the pilot didn't push them than to suppose that his sandwich would have fallen off his lap and into the throttle lever slots, jamming the levers. It's not that it couldn't have done -- it's just not the `nearest' situation to the actual one in which he did push, and they did move. Negating ceteris paribus conditions belongs to further worlds, not to nearer -- which is also why they're not explicitly represented in the causal situation.

Consider another example. Suppose I am 5 kilos `overweight'. I run a marathon in the heat, which reduces my weight, and reducing my weight, ceteris paribus, improves my health. But it's not at all clear that running a marathon in the heat improved my health. The ceteris paribus conditions include that one does not reduce one's weight in a health-damaging way, such as via bulimia, or via severe dehydration such as may be caused by running marathons in the heat. I lost weight because I dehydrated, and that is not in itself healthy. The ceteris paribus conditions are intimately tied up with our intuitions about what constitutes a `nearest' world. Possible conflicts between the ceteris paribus conditions for one statement and a presumed cause in another inhibit reasoning by transitivity.

So, why...because is transitive, causally depends is not, Hume's second definition is not, and Lewis-causes is. Intuitive reasoning with ceteris paribus conditions is similar to causally depends. It looks as though we have to be careful with what we're asserting before we can conclude that a relation holds between events because of transitivity from other such assertions. And this is why I can't give the answer I want to Fuller.

We have collected some answers to the basic questions posed above: arguments to a causal relation are events; occurrences of events may be described by propositions; actual events may be given something of a Boolean structure, which also enables us to say when events are part of other events. Although the relation of causality is prima facie polyadic, given that we know this relation, the Boolean structure enables us formally to define a purely binary relation of causality from it, and also a binary relation of partial-cause which holds between two event arguments. But we still don't know quite what properties causality has, such as that of transitivity. However, we can conclude that the two relations why...because... and Lewis's causally depends cannot be identical, because the first is transitive and the second not. Further, we noted at the beginning of this essay that the idea of `cause' which flows through human agency may fit the first relation, but not the second.

A Recent Logical Model of Causality

So, now that we understand the formality a little more, and have seen in (2) that the causal hypergraph is a useful structure, it might be worthwhile trying to fill out more of the formal background. In a recent paper, A Model for a Causal Logic for Requirements Engineering, Moffett, Hall, Coombes and McDermid try to do just that (17). They develop a sorted first-order language with causal predicates as primitives, and use this for a partial analysis of the Lufthansa A320 accident in Warsaw on 14 September 1993. They base some of their desiderata for causal relationships on Shoham's short discussion in (18). Shoham compares briefly the approaches of Mackie, Lewis and Suppes, and then formulates his own.

Moffett et al. consider the arguments of causal statements to be events and conditions (which are "a generalised form of state"). This is broadly consistent with the approach I have considered. However, they give no internal structure to events, whereas I have noted limited operations we may need to use, and relations which follow from them.

Without explicit discussion, Moffett et al. take causal statements to be assertions of a binary relation. We have shown how this may be done using a Boolean structure of events. Since these operations aren't available to Moffett et al., maybe they don't wish to make this move. But then they must explain how they may take causation to be binary. That must follow from their axioms for causation.

Their main construction is a mapping from conditions into a time line. The function interval-of has intervals of time as values: interval-of(c) denotes the interval of time over which condition c pertains. Time for them is a strict linear order. (They say that they make `minimal assumptions on time', but I guess this depends on what one means by minimal: if one counts the usual axioms, a partial order has one less axiom than a linear order; if one means `as few assumptions as possible to cohere with the real world', I would observe that time-like orderings in physics are partial -- and these are even used for reasoning about distributed systems (19); if one means `pragmatically as few restrictions as possible that enable us to do appropriate reasoning', I would observe that the branching-time logic CTL is widely used in reasoning about distributed systems.)

The denotation of interval-of(c), where c is a condition, is a union-of-convex interval in the sense of (20). A convex interval i is a set containing all points in the linear order that lie between two given points a and b, that is, i = { x | a <= x <= b } . Alternatively, one may eschew topology for algebra: the pair <a,b> can be taken to denote the very same interval and all the relevant operations have counterparts in either domain. The second construction leads to a simpler first-order theory. A union-of-convex interval is a collection of convex intervals that are separated: they don't share endpoints and they don't overlap.
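
A small Python sketch of the algebraic representation may help; the encoding of a convex interval as a pair (a, b) and of a union-of-convex interval as a sorted list of such pairs is my own reading of (20), not code from that paper.

    # A convex interval is a pair (a, b) with a <= b; a union-of-convex
    # interval is a sorted list of such pairs. `Separated' means that
    # consecutive pieces neither overlap nor share an endpoint.
    def is_separated(pieces):
        return all(b1 < a2 for (a1, b1), (a2, b2) in zip(pieces, pieces[1:]))

    print(is_separated([(1, 3), (5, 8), (10, 12)]))   # True
    print(is_separated([(1, 3), (3, 5)]))             # False: shared endpoint 3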

As conditions map into union-of-convex intervals, events map into time points on the same time line. This distinguishes their intended use of the word `event' from the usage of Davidson and Lewis. Philosophically, temporally instantaneous events such as they consider would be contentious. Formally, lots of assumptions have already been made by considering time as a linear order, and in this setup it's useful specifically to mark boundaries of temporally extended things. That's what events do. I'll continue to use the word `event' as before, and call Moffett et al.'s events Mevents.

Since I have suggested a certain structure to events, I should provide corresponding operations in the interval domain. Intersection (21) is the operation on union-of-convex intervals corresponding to the formation of a super-event. An algorithm for calculating intersection was given in (21). Moffett et al. do not consider the structure of events, so they do not provide corresponding operations on union-of-convex intervals. However, since they appear to be committed to structuring events by their assumption that causality is binary, it appears that they need the intersection operation also.
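
For illustration, here is a naive intersection on the representation sketched above, done by pairwise overlap of convex pieces; it is a straightforward sketch, not the algorithm of (21).

    # Intersect two union-of-convex intervals piecewise: the overlap of
    # two convex pieces, if nonempty, is again convex.
    def intersect(u, v):
        out = []
        for (a1, b1) in u:
            for (a2, b2) in v:
                lo, hi = max(a1, a2), min(b1, b2)
                if lo <= hi:
                    out.append((lo, hi))
        return sorted(out)

    print(intersect([(1, 5), (8, 12)], [(3, 9)]))   # [(3, 5), (8, 9)]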

Considering The Basics

Moffett et al. list some broad conditions on causes which guided their search for a formal causal structure. I consider them individually.

Following Shoham, Moffett et al. take causes to be antisymmetric and irreflexive. That is, events may not cause themselves, and if A causes B, it is not the case that B causes A. These are properties of binary relations -- Shoham also takes the relation of causes to be binary. There seems to me to be little need to impose these formal conditions on causality. Shoham provides no argument for them, and neither do Moffett et al. Lewis's causal dependency is reflexive, and it is neutral on antisymmetry (but this might be enforced by considering what `nearest' worlds there can be). If one is really committed to these properties for some reason, then formally one may always obtain an irreflexive, antisymmetric binary relation from a binary relation that isn't. Namely, suppose S is any binary relation. Then S - { <x,x> | x in dom(S) } is irreflexive (this operation is that of `excluding the diagonal' from a set S). To make an antisymmetric relation requires a choice of which of the two assertions to retain, but there is no formal problem in making such a choice -- maybe using the Axiom of Choice for the case of relations of absurdly large size, if one believes in such things.
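
The diagonal-excluding operation is trivial to state in code; the relation below is toy data.

    # Excluding the diagonal forces irreflexivity on any binary relation S.
    S = {("a", "a"), ("a", "b"), ("b", "b"), ("b", "c")}
    irreflexive_S = {(x, y) for (x, y) in S if x != y}
    print(sorted(irreflexive_S))   # [('a', 'b'), ('b', 'c')]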

However, Moffett et al. seem to believe that these properties are essential for their construction. They say: "These properties [of irreflexivity and antisymmetry] fall out naturally from the temporal ordering which is an essential part of the causal relationships which we define". I fail to see what this can mean. The pilot's hand is on the throttle levers, and he pushes them forwards. The push of his hand caused the throttle levers to move forward. These two events are contemporaneous - they occupy exactly the same time interval. Thus, the temporal relationship between the interval corresponding to the push and the interval corresponding to the movement is that of identity. Hence the relation on intervals cannot be irreflexive. From the relation on conditions which `falls out' of the relation on intervals, a purely formal operation, namely excluding the diagonal, will yield the desired irreflexivity on conditions. However, I fail to see how that is automatically enforced by the relations on intervals, as Moffett et al. suggest, or that it has any explanatory value or formal purpose. A similar argument applies to the assertion that antisymmetry `falls out'.

"Entities participating in the causal relation have a temporal dimension". Thus for them the relation of causes holds between conditions, and one may associate with a specific condition a time over which that condition obtains. If these times are taken to be union-of-convex intervals, as Moffett et al. take them to be, then these proposition-time interval pairs also have a propositional inferential structure, elaborated in (21).

Moffett et al. say that "Causation is context-sensitive, relative to a "causal field", the background against which causation is perceived to exist." I take the ceteris paribus conditions associated with a causal statement to provide some of the `causal field'. They distinguish this proposition from the proposition that "Causation is non-monotonic: addition of further conditions may falsify causal relationships." The ceteris paribus conditions also provide non-monotonicity (the Frame Problem in formal reasoning in AI is exactly that of determining the ceteris paribus conditions). I consider these two propositions to be two aspects of the ceteris paribus conditions.

This brings me to what I would consider a major weakness -- Moffett et al. offer no way of formulating the modal structure of `causes', such as the counterfactual interpretation found in Hume's and Lewis's causal dependency. They have chosen to stay firmly within the reductionist tradition, in which causality is explained by other relations on observable events (such as the `constant conjunction' of Hume). Moffett et al. map conditions onto the time intervals over which they obtain. A given such map says which conditions have occurred when. There is no set of alternative time lines to which it may refer, thus no way of expressing how things might have been: counterfactual assertions. The most that can be said is that two conditions are temporally related. They may be able to capture the causal relations on conditions from the relations on intervals, by taking the inverse images of (`pulling back') the relations on intervals to the conditions: we may say that John is taller than Sally by `pulling back' the relation of < on measured height to the persons whose heights they are. But we have already seen that two conditions in the causes relation can map to an identical time interval, that the causes relation is nevertheless required to be irreflexive, and that this entails that the relation of causes may not be obtained by pulling back.

However, the only axioms offered for assertions of causality are implications determining how the intervals of the two arguments are related. There are no axioms which enable us to reason with any of the causal relations themselves. We could understand them to want irreflexivity and antisymmetry as axioms, since they intend both properties of causality. They also mention transitivity favorably, so maybe that also. More on this below. So Moffett et al. are making a strong theoretical judgement that they can express everything useful about causality for accident analysis by reducing causality to temporal relations. However, we have already observed how the temporal relations themselves do not suffice to give them all the properties they desire for causality on conditions. So, although they make this strong theoretical commitment, they don't seem to have succeeded in fully justifying its efficacy.

" Causation is not material implication. "A implies B" does not entail "A causes B" ". Well, maybe, but this statement is highly misleading. There's a type confusion (in the sense of Gilbert Ryle). Material implication holds (or not) between propositions, thus in A implies B, A and B denote propositions. But if A and B denote propositions, it makes no sense to say that A `causes' B: propositions cannot `cause' other propositions - causality is a relation between events (in the Lewis-Davidson sense), not propositions. Propositions and events are different types of things, although events may be described by propositions, as we saw above. Conversely, it makes sense to say the event of the pilot's pushing his hand forward caused the forward movement of the throttle levers but not the event of the pilot's pushing his hand forward implies the forward movement of the throttle levers, because both the two arguments to `causes' are noun phrases denoting events, but the arguments to `implies' must be propositions. However, we can set the types correctly by considering the relation the event that A causes the event that B that holds between descriptions of events, i.e. propositions. It is consistent with Moffett et al.'s semantics that descriptions of events are propositions that are true over certain periods of time and are false at other times. If one takes all propositions to hold over periods of time (as in (21)), then the assertion that causality is not the material conditional is an assertion that: it is not the case that for all A and B, the event that A causes the event that B if and only if there is no period over which B is false while A is true. But there lacks an argument why this should not be, contingently, the case in the single time line considered. Why could the actual causality relations amongst event descriptions that pertain in the world not be contingently extensional with the material conditional? In fact, none of their axioms rule it out, as we shall see.

Axiomatising Causality

Moffett et al. consider four basic binary relations on conditions, direct_causes, sustains, and their `opposites', prevents and terminates. They add a fifth, leads_to, which is intended to be the transitive closure of direct_causes. Sustains is intended to hold between two conditions which have temporal extent: riding my bicycle fast sustains my hair (such as I have) flying in the wind. Let me define not-B to be the condition which maps to the interval which is precisely the complement of the interval that B maps to (see (21)). If I understand the intuition behind the other relations, I may define them as follows: A terminates B if and only if A causes not-B and A prevents B if and only if A sustains not-B. Riding my bicycle fast prevents my hair from settling down; and just as hitting the brick direct_causes me to go flying over the handlebars, it thereby terminates my sitting comfortably in the saddle without a care in the world. And thence to the axioms.
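
The interval complement needed for not-B is easy to sketch on the representation used earlier, provided one fixes a bounded time line (an assumption I add here; see (21) for the real construction) and glosses over open-versus-closed endpoint questions.

    # Complement of a union-of-convex interval within a bounded time line
    # [T0, T1]; endpoint conventions are deliberately glossed over.
    T0, T1 = 0, 100

    def complement(pieces):
        out, cursor = [], T0
        for (a, b) in pieces:
            if cursor < a:
                out.append((cursor, a))   # the gap before this piece
            cursor = b
        if cursor < T1:
            out.append((cursor, T1))
        return out

    print(complement([(10, 20), (40, 60)]))   # [(0, 10), (20, 40), (60, 100)]

With a complement in hand, terminates and prevents could then be defined from direct_causes and sustains applied to not-B, as above.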

Mevents, conditions and times form distinct sorts, times with the structure of a strict linear order. Mevents are associated with a set of time points, each representing an occurrence of the mevent, and conditions are associated with union-of-convex intervals. So, a mevent has no extension in time, but occurs at a time point. Various functions are defined, such as those from a condition to the mevents which represent the start and end points of intervals associated with the condition. Axioms for the interdefinition of the causal relations on conditions are given: the recursive definition of leads_to in terms of itself and direct_causes; various auxiliary relations Sustains and Prevents in terms of direct_causes and sustains, prevents and terminates. Various functions on points and intervals are defined that allow relations and operations on the time intervals that are in the range of interval-of.
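
As I read the sorts: times are points on the line, a mevent occupies a single time point, and a condition maps to a union-of-convex interval. The boundary functions might then look as follows in the toy representation above; the names start_of and end_of are mine, standing in for Moffett et al.'s functions.

    # Boundary mevents of a condition: the start of its first convex
    # piece and the end of its last, as time points.
    def start_of(condition_interval):
        return condition_interval[0][0]

    def end_of(condition_interval):
        return condition_interval[-1][1]

    braking = [(10, 14), (16, 30)]   # a condition holding over two spells
    print(start_of(braking), end_of(braking))   # 10 30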

And hence we arrive at a weakness. The axioms for the causal relations themselves are all of the following form:

(A atomic-causal-relation B) => temporal-formula
(with one exception in which a conjunction of two atomic causal relations appears as the antecedent). The temporal formula is an expression that appears always to be definable in the language of (22). It should be clear from the form of the axioms that they do not axiomatise properties of the atomic causal relations themselves. They enable us to infer that if a causal relation holds, then a certain temporal relation holds on the associated intervals. This will enable us to infer contrapositively that if a certain temporal relation does not hold, then a certain atomic causal relation also does not hold. (A implies B is logically equivalent to not-B implies not-A.) Even if we accept that the given temporal relations always hold when the given causal relations hold amongst the conditions (and I am by no means sure that I do), it seems that with these axioms we can only infer information about causality from information about temporal relations, and that the causal information we can infer has the form of negative atomic causal assertions only. This therefore counts as a method for inferring negative causal information only, and cannot count as a general method for reasoning about causality. For example, it cannot yield any of the positive assertions in the causal hypergraph in (2). (I call an assertion of the form (A atomic-causal-relation B) a positive assertion, and one of the form not-(A atomic-causal-relation B) a negative assertion.)
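
The shape of the available inferences can be seen in a few lines of Python. The particular temporal consequent used here, that a cause's interval must begin no later than its effect's, is a hypothetical stand-in, not one of Moffett et al.'s actual axioms; the point is only that an axiom of this form licenses refutations and nothing more.

    # Axiom shape: (A direct_causes B) => temporal-formula. Used
    # contrapositively, a failed temporal formula refutes the causal
    # claim; a satisfied one licenses no positive conclusion at all.
    def temporal_consequent(interval_a, interval_b):
        return interval_a[0][0] <= interval_b[0][0]   # hypothetical consequent

    def test_direct_causes(interval_a, interval_b):
        if not temporal_consequent(interval_a, interval_b):
            return "not (A direct_causes B)"
        return "no conclusion"

    print(test_direct_causes([(5, 9)], [(2, 4)]))   # not (A direct_causes B)
    print(test_direct_causes([(1, 3)], [(2, 4)]))   # no conclusion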

The only way I can see these being used is the following. One has an actual causal history, in which it is known when certain conditions and events pertained. These facts will enable us, via the axioms, to infer that certain causal relations did not hold amongst the conditions. If these relations should have held, then this will alert us to look for failure of the ceteris paribus conditions. But in most accident scenarios it seems that we know that certain ceteris paribus conditions failed anyway. So I'm sceptical that this axiomatisation can help in ascertaining what might have gone wrong.

Moffett et al.'s analysis of the Lufthansa A320 Warsaw accident bears out this prognosis. They consider the fragment of the causal hypergraph which pertains to the operation of the braking system (left unrefined in (2)). They make causal assumptions, give some physical definitions, also regarded as assumptions, and state goals to be proved. Unfortunately, the goals are all positive causal formulae. They cannot prove them, but simply for logical reasons: they may only prove (non-definitional) negative causal formulae from positive causal assertions. Their only means of proving statements from positive causal formulae is by chaining formulae of the form

(A atomic-causal-relation B) => temporal-formula
together with formulae of the form
temporal-formula => not-(A atomic-causal-relation B)
so, from the form of this axiom system, one can only hope to prove causal implications of the form
(A atomic-causal-relation B) => not-(C atomic-causal-relation D)
and one thus cannot hope to prove any of their desired goals, none of which have the form not-(C atomic-causal-relation D). It is hard to see how such weak inferences can give us much information about the system under analysis.

Sketching A Method

One may fix the theory of Moffett et al. by including some more properties of causality. But one would thereby obtain a theory still without means of expressing the modality in assertions of causality. More is needed.

A formal method of determining caused statements would help us draw the causal hypergraph and thereby help in accident analyses. Given suitable axioms, one might consider an analysis method such as the following. Let a theory be a collection of assumptions (sentences of which we will consider the deductive consequences). Let a situation be a collection of specific mappings from conditions onto time intervals and from mevents onto times (a single Moffett-model). First, many or most of the `pcaused' statements will be obvious (see (24) for the view that we can directly perceive causal relations between certain events). Second, suppose we are uncertain about a particular instance of a causal relation, A pcaused B. We focus on the (let us say) single occurrences of conditions A and B that are important for the analysis. We leave fixed the situation earlier than this occurrence of B, we leave in also the occurrence of A, and we infer from the theory the logical consequences of omitting this occurrence of B. When we infer a contradiction, we omit the occurrence of A, along with all conditions that pcause A, and so on recursively backwards, yielding a second Moffett-model. When we cannot infer a further contradiction in this second Moffett-model, we conclude tentatively that A pcaused B. Repeating this process, we may eventually infer &{ A | A pcaused B } caused B. This is a mixed syntactic-semantic method (some looking at the Moffett-model, some inference) of cashing out what is meant by the definition of A caused B that I have given earlier.

It has weaknesses. First, it might be horrendously complicated, although I imagine that in actual use its complexity would not be too great. Second, it depends crucially on inferences not being drawn, and on doing `as much as one can', and thus is potentially incomplete unless one has a decision procedure for the theory which will allow non-inferences to be concluded as well as inferences. Such a decision procedure might exist for such Moffett-models, depending on how strong the theory is (weaker theories can be more likely to have decision procedures, but then the things that one can conclude are limited). The advantage is that such a method is relatively rigorous, and tests our intuition against formal criteria. It uses the sorts and the intuition behind Moffett et al.'s approach to give a rigorous meaning to the idea of `nearest world' and thus to the semantics of caused.
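
A toy rendering of the test may make the procedure plainer. Everything here is hypothetical scaffolding: the situation is a bare set of condition occurrences, `contradiction' stands in for the theory's deductive consequences, and pcauses_of for the already-obvious pcaused edges.

    # Tentatively test `A pcaused B': omit B; if a contradiction follows,
    # also omit A and, recursively, whatever pcauses A; if the
    # contradiction then disappears, conclude tentatively that A pcaused B.
    def tentatively_pcaused(A, B, situation, contradiction, pcauses_of):
        s1 = situation - {B}
        if not contradiction(s1):
            return False                 # omitting B alone is consistent
        to_omit, frontier = {A}, [A]
        while frontier:
            for y in pcauses_of(frontier.pop()):
                if y not in to_omit:
                    to_omit.add(y)
                    frontier.append(y)
        return not contradiction(s1 - to_omit)

    edges = {"loss of control": ["bad airspeed data"],
             "bad airspeed data": ["pitot icing"]}
    sit = {"pitot icing", "bad airspeed data", "loss of control"}
    contra = lambda s: "bad airspeed data" in s and "loss of control" not in s
    print(tentatively_pcaused("bad airspeed data", "loss of control",
                              sit, contra, lambda x: edges.get(x, [])))   # True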

I suspect that providing a method of determining the relation caused would go a long way towards putting the analysis of accidents on the rigorous foundation that Moffett et al. were hoping for with their approach. The potential advantages, as they had seen, are great. Rigor and formality lead to algorithms and thereby to greater analysis capability and to greater knowledge. The pity is that it's somewhat more complicated than we'd hoped.

Peter Ladkin



(1): David Hume, Treatise of Human Nature, Book I, Part 3, Sections 1-6, 11, 12, 14, 15, London, 1739; and Enquiry Concerning Human Understanding, Sections 4 and 7, London, 1748.
(2): Peter Ladkin, The X-31 and A320 Warsaw Crashes: Whodunnit?, in http://www.rvs.uni-bielefeld.de/~ladkin/
(3): David Lewis, Causal Explanation, in Philosophical Papers, ii, Oxford University Press, 1986, 214-240. Also in (4), 182-206.
(4): David-Hillel Ruben, ed., Explanation, Oxford Readings in Philosophy Series, Oxford University Press, 1993.
(5): John Stuart Mill, A System of Logic, Book III, Chapter 5, London, 1879.
(6): Ernest Nagel, The Structure of Science, New York, 1961.
(7): Ernest Sosa and Michael Tooley, eds., Causation, Oxford Readings in Philosophy Series, Oxford University Press, 1993.
(8): David Lewis, Causation, Journal of Philosophy 70, 1973, 556-567. Also in (7), 193-204.
(9): David Lewis, Counterfactuals, Basil Blackwell, 1973.
(10): David Lewis, Events, in Philosophical Papers, ii, Oxford University Press, 1986, 241-269.
(11): Bertrand Russell, On Denoting, Mind 14, 1905.
(12): Bertrand Russell, Introduction to Mathematical Philosophy, Chapter XVI, London, Allen and Unwin, 1919. A relevant excerpt is pp. 46-55 of (13).
(13): A. W. Moore, ed., Meaning and Reference, Oxford University Press, 1993.
(14): Donald Davidson, Causal Relations, Journal of Philosophy 64, 1967, 691-703. Also in (15) and (7).
(15): Donald Davidson, Essays on Actions and Events, Oxford University Press, 1980.
(16): Peter Simons, Parts: A Study in Ontology, Oxford University Press, 1987.
(17): Jonathan Moffett, Jon Hall, Andrew Coombes and John McDermid, A Model for a Causal Logic for Requirements Engineering, Journal of Requirements Engineering 1(1):27-46, March 1996. Also in ftp://ftp.cs.york.ac.uk/hise_reports/req_capture/causal.ps.Z.
(18): Yoav Shoham, Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence, MIT Press/Bradford Books, 1987.
(19): Leslie Lamport, The Mutual Exclusion Problem: Part I - A Theory of Interprocess Communication, Journal of the ACM 33, April 1986, 313-326.
(20): Peter Ladkin, Primitives and Units for Time Specification, Proceedings of AAAI-86, AAAI Press, 1986. Also a chapter of (23).
(21): Maroua Bouzid and Peter Ladkin, Simple Reasoning with Time-Dependent Propositions, Journal of the IGPL, to appear, 1996.
(22): Peter Ladkin, Time Representation: A Taxonomy of Interval Relations, Proceedings of AAAI-86, AAAI Press, 1986. Also a chapter of (23).
(23): Peter Ladkin, The Logic of Time Representation, Ph.D. Thesis, University of California, Berkeley, 1987. Also in http://www.rvs.uni-bielefeld.de/~ladkin/
(24): G. E. M. Anscombe, Causality and Determination, Cambridge University Press, 1971. A relevant excerpt is Chapter V of (7).