Networks and distributed systems
April 1995, revised HTML version 23 February 1997
The example is simple and clear. Its purpose is to provide a counterexample to claims of Fetzer and others, and to ground the arguments which follow.
An implementation of S:
Here is the theorem stating the correctness of this implementation:
Theorem: S implements the specification.
Proof: We need to show that all possible behaviours of S satisfy the specification. Let b denote an arbitrary system behaviour (Footnote 2). By definition of the specification, b satisfies it. QED.
Those who prefer formality may consider the specification
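    ~(P(-) & ~P(-))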
in which P is any unary predicate symbol that may be interpreted as any unary predicate. We leave the proof of correctness in a classical or constructive logic to the reader. Let Pi be this specification: formally, define Pi to be the predicate ~(P(-) & ~P(-)), where `-' is to be replaced by the same denoting term in both occurrences.
The theorem above provides a counterexample to Fetzer's conclusion, thus relieving us from having to examine his argument in detail at this point. See (Lad9x) for a review of Fetzer's papers.
The theorem in An Example ... establishes (a) for the system S. To establish (b), assume for the purposes of this argument that my fridge has a microprocessor in it running a bit of code (Footnote 3). We may conclude (c) that the implementation S has a ``software reliability of exactly 1''. Therefore this implementation of S is a counterexample to the claim of (Many93).
That's ideal for a counterexample. The purpose is to refute certain claims, which it does.
The simpler the counterexample, the clearer the refutation: there are fewer places in which to question the argument.
Maybe, or maybe not, but the problem does not then go away. The wedge has been driven, and so the issue is altered. The question now becomes what kinds of specification/system pairs allow an a priori demonstration of correctness? We now know, contrary to Fetzer, that there are at least some. We argue below that systems with specifications of the sort in Section An Example ... are not the only kinds of systems for which it may be possible to provide a rigorous absolute correctness proof. A careful statement delineating for which systems it is possible to provide absolute correctness proofs would be a valuable scientific contribution to system engineering. I shall only be able to provide suggestions.
One may refer to the works and correspondence cited to see that this statement is untrue. (My experience is that few people are aware of the existence of counterexamples until one is pointed out.)
Maybe, maybe not. But this is no argument against getting the basics right, before proceeding to grander questions. `Interesting' and `useful' refer to qualities of software about which I don't yet know how to be precise. I argue below that mathematical software of certain kinds, for example a GCD algorithm, may lend itself to being proved correct. One may argue that this sort of mathematical software is both interesting and useful.
This is a plausible claim, in that there are some systems S for which this is true. However, it is false for the example S above, as we show in For Some Systems, There Is No Misfit. One reason (the objection runs) that the two statements above might be different, is that a specification might be incorrect. I need to show that in the case of S above, this cannot happen. I shall provide an argument that S is correct if and only if S implements its specification, for this particular system and specification above. To do that, I shall investigate further what ` S is correct' might possibly mean, in Concerning Systems, Correctness and Failure and subsequently.
First, there is a notion of proof, since Frege, which is not at all vague, but on the contrary has a precise technical definition. Second, note that proving system S correct is shorthand for proving that system S is correct, which is in turn shorthand for proving the following proposition: System S is correct. In other words, we're not `proving systems', we're proving propositions about systems. The usual mathematical, logical, or philosophical notion of proof is involved.
Is the term correct vague when applied to systems? I argue in For Some Systems, There Is No Misfit that although this term may be vague in some uses, it's not vague when applied to the system S of An Example .... In fact I claim that the only reasonable meaning in this case is that S implements the specification. In the meantime, to consider whether this point is basically sound, the reader might ask herself: if not this, then what else could correctness of the S in An Example ... possibly consist in?
These considerations might be of social-psychological importance, but they are irrelevant to the logic of the argument. When refuting a claim technically, only logical relevance counts. (This is not to discount the importance of persuading people. Correct technical reasoning should be the appropriate way of persuading scientists, mathematicians and philosophers of the truth of a claim.)
This is another social claim which is irrelevant to the validity of the technical claim. I also urge caution when conflating word-arguments with `opinions'. Arguments made using the written word, when they are precise, have the same logical status as arguments made using mathematical symbols. `Opinions' are generally taken to be points-of-view with which legitimate disagreement is possible. The technical arguments made in this paper are not intended to be of that kind (they may nevertheless be wrong if they contain mistakes).
Claims were made by Fetzer and others which are provably false. It is important to produce a counterexample when one exists. And by putting the arguments on a more technical level, it should be possible to define more carefully the principles underlying the engineering of systems. Social-psychological issues are addressed further in The Rhetorical Style of Computer Science.
The first alternative has also been argued using examples, in the context of programs, by Thimbleby (T93). I won't at first attempt to choose between the two alternatives, although after more discussion, in Analysing Arguments, a preference for the latter will present itself. It suffices at first to establish the point that you can't really tell what a system is without also having some idea about how it's supposed to behave. This highlights an influential misconception entertained both by Fetzer and by some of his critics. I want to establish that when we're talking about system correctness, there aren't just physical bits and pieces to consider, there are also more abstract entities like propositional assertions (Footnote 6). The proposal does not preclude that there might be further parameters of the correctness predicate (for example, `the environment', other system specifications). The argument for the proposition relies on a principle of logical form due to Frege, and a criterion I'll dedicate to Russell.
The Fregean Principle - roughly, that an assertion is analysed into a function (predicate) applied to arguments - led to the current notation for logic. The Russell Criterion says that if there's a functional dependency of assertion P on something with logical form F, then either F appears as an explicit argument, or there's some list A_1, ..., A_n of arguments to P such that the denotations of A_1, ..., A_n are all functionally dependent on the denotation of F, and their values, when ``plugged in'', yield the right value of P given the value of F. Using this criterion, the argument proceeds as follows:
Suppose my fridge implements the following specification Psi:
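    Psi: it keeps the things inside it cold.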
Suppose correctness is a unary property that holds, or not, of system S, my fridge. That is, a logical form of a correctness assertion is Correct(S). I omitted to say earlier that the fridge has no plug to obtain electricity. So it doesn't keep things cold. (Footnote 8) So it's not correct. That is, ~Correct(S). But there is a proof in An Example ... that the system is correct, that is, Correct(S). Taken at face value, this would violate Leibniz's Law that an object does not have two contrary properties at the same time (Footnote 9). The apparent logical form of the correctness assertion, its surface syntax, which we had written Correct(S) with the given logical type, does not seem to reflect the functional dependence on the specifications Pi and Psi. These considerations show that the correctness-of-S assertion is functionally dependent on the requirement - in the first case, Pi: that any behavior at all is allowed; in the second, Psi: that it keeps things cold. Using the Russell Criterion, we may conclude that there is some parameter or parameters of the logical type which reflects this functional dependency on Pi and Psi. There are four possibilities:
Roughly, these are:
(a) the specification appears as an explicit argument of the correctness predicate, which becomes a binary relation C(Spec, S) between specifications and systems;
(a') the specification is absorbed into the predicate Correct, which remains a unary predicate;
(b) the specification is absorbed into the term denoting the system, which thereby acquires a more complex logical type;
(c) the specification is considered to be a part of the system S itself, so that Correct remains a unary predicate of systems.
What are the reasons for preferring one of these solutions over another? Before I consider them, I must clarify what is meant by `absorbing the type', that a functional dependence on the specification is already somehow `included' in either term or predicate.
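Following (Sim87), some standard axioms for the relation Part are that everything is a part of itself,

    Part(P,P),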
that if P is a part of S and vice versa, then P and S are identical:
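    Part(P,S) & Part(S,P) => P = S,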
and that Part is transitive:
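    Part(P,Q) & Part(Q,R) => Part(P,R).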
Thus, Part is a reflexive, antisymmetric, transitive relation, i.e. a non-strict partial order. I shall follow (Sim87) and denote it < in infix notation. The corresponding relation of proper Part, which is irreflexive and transitive, is denoted by <:. (They are related by the properties x <: y <=> x < y & ~(x = y) and x < y <=> x <: y \/ x = y.) Parthood is not generally a linear order - my arm is part of me, as is my other arm, but neither is part of the other.
It's widely accepted that two different parts of an entity can be considered together as one part, different from either component (Sim87). This larger single part is called the mereological sum of the two components, and I denote the operation of forming the mereological sum by ++. What kind of thing is P ++ S, exactly? One way of helping to define it implicitly, along with the notion of part itself, is by means of axioms. Some common axioms are that the mereological sum of something with itself is just itself,
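    P ++ P = P,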
that mereological sum is commutative and associative,
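    P ++ S = S ++ P,   (P ++ S) ++ R = P ++ (S ++ R),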
that mereological sum is an upper bound (in the sense of <) to each argument, which follows from commutativity and
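    P < P ++ S,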
and also increasing in both arguments: a mereological sum of parts of P and S is itself a part of P ++ S,
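    x < P & y < S  =>  x ++ y < P ++ S.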
Since there are many things that may have parts of P and S as parts, P ++ S is usually conceived as some smallest entity which has all the parts of P and S as parts. The axioms above already entail that all the parts of P and S, as well as their mereological sums, are parts of P ++ S. We can require that P ++ S is smallest by saying that the parts of the mereological sum are precisely these and no more:
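    x < P ++ S  <=>  x < P \/ x < S \/ (Exists y)(Exists z)(y < P & z < S & x = y ++ z)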
Some of these properties will be used in Analysing Arguments.
If we extend the notation, we may express the property that an entity is the mereological sum of its parts by
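    S = ++ {x : x < S}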
However, in order to make sense of this operation with an undetermined, maybe infinite, number of arguments, it is easiest (and common) to treat the prefix ++ as a unary operator on classes. But this requires that some set theory be introduced, including perhaps the existence of an infinite class, so that one can treat an unbounded class of parts, and it may be preferable to see what can be done without it.
The Russell Criterion does not require that if an entity is functionally dependent on another, that the logical form of the first must expressly include a term denoting the second. The criterion would be absurd if it were to! For example, nonempty subclasses can reasonably be considered parts of classes (Lew91), and a class can be considered to be functionally dependent on its class of subclasses (by the axiom of extensionality for classes). There are exponentially many subclasses in the size of a class, not to speak of infinitely many for infinite classes. We would never be able to define the logical form of any term denoting a class with infinitely many parts if we were expressly to include denotations for each part in the term itself.
Since it seems to make little sense to include the parts of an entity as explicit arguments to its logical form, one can consider the possibility that I called `absorbing the type', which retains the `naive' logical form, Correct(S). Namely, the specification is considered to be a part of either of the entities - predicate or term - occurring in the assertion. If so, which entity could it be considered part of? Considering parts of a predicate requires considering what a predicate is - for example, a predicate could be identified with the class of all entities (or tuples of entities in the case of a multi-argument predicate) which satisfy that predicate, that is, of which the predicate is truly predicated. If we make this move, and we also have a notion of what parts classes have, as in (Lew91), then we are able to say what parts predicates have. This is a fairly involved move, dependent on the notion of class. In contrast, in the case of concrete physical entities, the intuitions about what are parts and what are not are much more firmly grounded. For example, we noted that a system has `natural' parts consisting of hardware and software.
Options a' and c preserve the `naive' form of the correctness assertion and the logical types of its subexpressions. However, a' rests upon an analysis of what the parts of a predicate might consist of. It is a false option. Here is the argument dismissing it.
Objects have parts. One way in which we could express the `parts' of a predicate would be to reinterpret it as an object of some sort and then consider its parts. The standard way to interpret a predicate as an object is to interpret it as a set-theoretic object, accomplished by considering it as a set of ordered argument-value pairs. Given that the parts of classes are subclasses (but the converse does not necessarily hold (Lew91)), all the parts of the predicate considered as a class are subclasses of it. Reinterpreting, the parts are logically strictly stronger predicates, having the same logical type as the predicate itself. Thus, were a specification to be a part of the predicate Correct, it would have the logical form of a unary predicate of systems, with logical type systems -> {True,False}. That is, we would be able to apply a specification to a system to obtain an assertion: namely, Pi(S), Psi(S). This has a natural interpretation: to assert that a system is correct is actually to assert that it satisfies the specification, which is to assert the specification of the system. And Pi(S) is True, Psi(S) is False. This is plausible, but before being seduced by this consequence of the argument, note that we have argued that a specification is a part of the predicate Correct, and that `part' entails `subclass of'. That reasoning must hold for every specification. Assuming that any specification is in principle permissible, we may choose the logically-true specification Pi, which is universal, and which also satisfies Pi < Correct, and conclude that Pi = Correct, that Correct is also the universal predicate of its type: every assertion Correct(S) of any system S must be True. This is implausible: it renders every assertion of correctness vacuously true. I conclude that this proposal doesn't allow us to make any sense of option a', and so I consider it no further.

Some divisions of systems into parts, such as hardware and software, are recognised and in habitual use already. Thus option c, unlike option a', is plausible. Consider option b. It entails that the expression Sys has type
In categories of logical types, it is well-known that one can move from something of type (A x B) -> C to something of type A -> (B -> C) (by a `Currying' transformation, named after Haskell Curry), and vice versa, by means of natural transformations from the category of types into itself. No matter which category of types one prefers, these operations are admissible. Hence we can conclude that the type of Correct is isomorphic to the type
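    (specifications x systems) -> {True,False},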
which is exactly the type of C of option a. Thus there is little argument of a logical nature that would dispose us to choose between options a and b. We may freely choose option a as representative of both, solely for aesthetic reasons.
We thus need to consider only options a and c. I have established the proposition claimed at the beginning of this section. Now to use it, in developing an argument for preferring option c out of an analysis of the kinds of arguments made by system engineers.
Before doing so, there is a particular interpretation of option a to note. One can take the operator C, which takes specifications and systems and produces a truth value, simply to be the apply operator: C(Theta,S) == Theta(S) (where `==' means `defined to be '). (Apply is a well-studied logical operator from Lambda-calculus and combinatory logic, where it is represented by concatenation). When considering option a, it may make sense to interpret Correct_1(S) as Pi(S) and Correct_2(S) as Psi(S). Nevertheless, further discussion will now show that this is a less desirable option than that of c.
Option c says that
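    Spec_S < S (the specification of S is a part of the system S),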
and therefore that a correctness predicate has type
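    systems -> {True,False}.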
Are there good reasons to prefer one option over the other?
Observe that hardware and software are mutable: that is, one can (with some work) implement many software functions in hardware if one chooses (this is done with special processors, such as those used in avionics (MS95)), or, more obviously, one can implement any given hardware function in software (as is done with RISC processors). Thus, for any given system design, there is a choice of what functionality to implement in software and what in hardware. In my view, this bolsters the argument that hardware and software can be considered as parts of a system: they are used mutably to implement various parts of the system design, and different parts of the design have the same logical type (whatever this may be - something like a description of functionality). However, this argument cannot be made for the specifications Pi and Psi. A specification is in general some form of predicate with an argument of type system. It may or may not look like a `description of functionality', and it does not appear intuitively to be mutable with hardware or software, in the manner in which these latter are with each other. One may judge this analysis abstruse and having little to do with practical system engineering. In fact, it may be used in analysing arguments such as the following, (Footnote 11) whose analysis will lead us to prefer option c:
``This computational system failed. Its hardware didn't fail.
But the system is composed of hardware and software.
Therefore the software must have failed.''
The surface logical form of the argument as presented is:
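    Failed(S),  ~Failed(hardware_S),  S = hardware_S ++ software_S;  therefore  Failed(software_S).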
The argument cannot be seen to be valid in this form because the conclusion does not follow from the premises. So, either the logical form of the premises doesn't exhibit enough to enable the inference, or there must be missing premises. The goal of a logical analysis is to analyse the forms of the assertions, and to fill in missing premises if necessary. We shall do so.
The classification of failures in critical computer systems has been studied (Lut93.1) (Footnote 12). Over 95% of these failures were neither failures in the hardware, nor failures of the software to meet its specifications. They occurred in circumstances which were unaccounted for in the specifications and may reasonably be regarded as failures in the specification. Parnas counts specification in the category of `documentation' (P90). As noted in A Proposition About Correctness, the specification of a system plays a necessary role in assessing the correctness of the system. It should not then be surprising to find the specification playing a logical role in attribution of system failure. But the formal version of the failure argument given above does not reflect this. We could introduce the role of the specification in the ways envisaged in Failures, Faults and their Logical Interconnections. For the present purpose, I shall simply take the predicate Faulty to be a contrary to Correct, that is
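    Faulty(S) => ~Correct(S),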
and replace the predicate `Failed' by `Faulty' in the argument under discussion. I now consider rectifying the argument under the options c and then a.
``This computational system has failed to fulfil its
expectations.
Its hardware didn't fail.
But it's composed of hardware and software, plus
a description of its requirements.
Therefore the software has failed.''
Formally, this argument is
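    Faulty(S),  ~Faulty(hardware_S),  S = hardware_S ++ software_S ++ Spec_S;  therefore  Faulty(software_S).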
The rephrasing is clearly lacking in that its conclusion is too strong. Cognisant of where faults in critical systems may lie, we could expect the argument to read:
``This computational system has failed to fulfil its
expectations.
Its hardware didn't fail.
But it's composed of hardware and
software, plus a description of its requirements.
Therefore either the software
has failed or the requirements description failed.''
The formal version is
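    Faulty(S),  ~Faulty(hardware_S),  S = hardware_S ++ software_S ++ Spec_S;  therefore  Faulty(software_S) \/ Faulty(Spec_S).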
This indicates what premise to add in order to turn this into a correct argument, namely
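    Faulty(S) <=> Faulty(hardware_S) \/ Faulty(software_S) \/ Faulty(Spec_S).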
Likewise, the addition of the premise Faulty(S) <=> Faulty(hardware(S)) \/ Faulty(software(S)) would have turned the original argument into a valid argument. (If one is worried about the simplicity of this reasoning, note that any propositional argument at this low level of syntactic complexity is bound to look trivial.) But instead of this formula being included as a premise, the assertion was included that S is a mereological sum of parts. This suggests that the missing principle on which the argument is based is that
Clearly this generalises. The general principle is that if a system is the mereological sum of parts, and it fails, then at least one of the parts must have failed.
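    S = P_1 ++ ... ++ P_N  =>  (Faulty(S) <=> Faulty(P_1) \/ ... \/ Faulty(P_N))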
Call this the part-failure equivalence condition. The axioms of mereology in Absorbing the Type support the tenability of this equivalence scheme, in which N varies, and which has to remain appropriate as systems grow to include more parts. It would be logically incoherent without the properties that the axioms ensure. The axioms ensure that ++ is an idempotent, commutative, associative operation, as is disjunction; and that if the system S that failed is part of a larger system T, S < T, then T may consistently also satisfy the part-failure equivalence condition (Footnote 13). For example,
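    if T = P ++ S with Faulty(T) <=> Faulty(P) \/ Faulty(S), and S = R ++ V with Faulty(S) <=> Faulty(R) \/ Faulty(V), then Faulty(T) <=> Faulty(P) \/ Faulty(R) \/ Faulty(V).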
So, if the part-failure equivalence holds for T, and also for S as a part of T, then it also holds for the finer decomposition of T into P and the parts R and V of S (which are also of course parts of T). Which version of the part-failure equivalence is used will depend on the parts into which the system is decomposed. Picking irredundant parts is a good idea - for example, there would be no need to include a part S such that S = R ++ V when R and V are already included.
I shall leave the Spec argument place in the Faulty predicate undetermined for the moment. The part-failure equivalence principle would read
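    S = P_1 ++ ... ++ P_N  =>  (Faulty(Spec_S, S) <=> Faulty(??, P_1) \/ ... \/ Faulty(??, P_N))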
Alone, this does not suffice to turn the argument into a valid argument, since the specification is no longer a part of the system and so the modified part-failure equivalence principle doesn't provide the needed premise: taking the original argument
and replacing S = hardware_S ++ software_S by
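    Faulty(Spec_S, S) <=> Faulty(??, hardware_S) \/ Faulty(??, software_S)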
yields only the same conclusion to the argument - the original conclusion, which is faulty because too strong. With this form of the predicate, the easy way of saying that the specification itself might be at fault seems to elude us. Maybe a deeper analysis will suffice. One of the first questions is: what should be the arguments replacing `??' in the assertions Faulty(??,hardware_S) and Faulty(??, software_S)?
If they are identical to Spec_S, the specification of the whole system, then one should search for a reasonable intuitive interpretation of what the assertion means, that the hardware failed to fulfil the system specification. The hardware is only part of a system, and the specification could use terms that refer to parts that aren't in the hardware, and these terms would either have to remain uninstantiated, or be quantified over.
For example, suppose your bicycle has only a front wheel, and its specification states, amongst other things, that it shall have a front and a rear wheel. Consider the front wheel as a part, and the term denoting the rear wheel as uninstantiated. This cannot form an assertion that the front wheel succeeds or fails to fulfil the specification, since there are uninstantiated terms. And no such question can be asked of the `missing' rear wheel, because there is no such thing! Consider now the alternative, that the term standing for the rear wheel becomes existentially quantified. There exists a rear-wheel such that ... . But in fact there isn't, so the specification is unfulfilled (is false). But the specification for the front wheel intuitively is supposed to be true! - the front wheel does fulfil its obligations as a part of the system. So this option doesn't provide us with an appropriate assertion either. We remain without a suggestion as to what the argument should be or how it should be interpreted in an assertion Faulty(??, part-of_S).
Since Faulty is a contrary to C, one could rather ask the converse question: what would be the missing parameter in an assertion of correctness, C(??,hardware_S) and C(??, software_S)? In this form, one is tempted to say that an assertion of correctness of part of a system could consist in saying that the part satisfies that part of the specification restricted or relativised to itself in some manner other than by quantifying over the variables not in the domain of the part. In order to complete the argument we're investigating in a manner similar to the way in which we were able to complete it for the form Correct(S), we need to be able to say that a failure in the system must be a failure in a part of the system to satisfy the specification `relativised to itself'. This is not so clearly possible - it's folklore that one can have components which work wonderfully as components but which have complex undesired interactions when composed together. I shall nevertheless attempt it in Some Decomposition Principles for Specification-as-Argument.
No matter what we may decide for the missing arguments to Faulty, a major problem in system analysis is how to compose specifications (AL93). One outcome of the research into composition may be that one will be able to define the design of a system as the composition of the specifications of its components. This helps in a failure analysis, as demonstrated by (G84) and (R87) (with a correction in (GSW89)) for relatively straightforward hardware components. However, until we have a definition and clear idea of the properties of such a general composition, we will not know whether, for example, the composition contains more information than is contained in the system specification (e.g. the composition logically implies the system specification but is not logically implied by it).
It seems thus that there are significant problems to be solved before one can add the `missing premises' to the argument interpreted according to option a:
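namely, a premise of roughly the form

    Faulty(Spec_S, S) <=> Faulty(Spec_{P_1}, P_1) \/ ... \/ Faulty(Spec_{P_N}, P_N),

in which each Spec_{P_i} stands for the specification somehow relativised to the part P_i.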
This condition is simply not satisfied for certain fault-tolerant systems. Certain significant parts of the system are designed to satisfy their specification even though a particular part of them is faulty. It is therefore more appropriate to consider only `trusted' parts of a system. These are parts such that, if they fail to fulfil their function, the system may fail to fulfil its function. It would be an engineering decision to say what the trusted parts of a system would be. Suppose, without loss of generality, that the trusted parts are selected by a predicate A, and constrain the relative correctness assertion to be
This implies the assertion
We also need to ensure that the trusted parts compose the system,
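    S = P_1 ++ ... ++ P_N, where P_1, ..., P_N are the parts selected by A.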
Supposing we have figured out what the `composition' operator for specifications of trusted parts is supposed to be (let's denote it by **) (Footnote 14), we can expect that, if the system is correct and all the trusted parts satisfy their specifications, then when the composition of the specifications of the individual trusted parts is taken, the specification of the whole system is satisfied. That is to say, the composition must imply the specification of the whole system. This must hold no matter what the decomposition into trusted parts, so long as the system is the mereological sum of the parts. If not, and the composition does not imply the system specification, then the system specification may be false in circumstances in which the composition is true. That is, even though all the trusted parts fulfil their joint specification, the system may fail to fulfil its specification. This is contrary to our intuition of what the composition operator should satisfy. The condition that the system specification cannot impose a more stringent logical condition on the system than is collectively imposed on the parts is given by:
    Spec_{P_1} ** ... ** Spec_{P_N} => Spec_S, where P_1, ..., P_N are the trusted parts into which S is decomposed.
Thus, the condition on the overall system can be formulated as
This yields an extra disjunction in the conclusion to the failure argument, namely that
which is equivalent to
which says in particular that there is a possibility that the composition of the specifications of the parts is not strong enough to guarantee that the specification of the system as a whole is fulfilled. This is certainly a point that has to be addressed in any verification that separates a system into parts, but it still does not give us the original missing disjunct in the conclusion of the failure-argument, a problem we are still left with.
The ability to formulate the failure-argument simply, and simply to supply the correct conclusion containing a disjunct stating that the specification may be lacking, is a powerful reason for preferring option c when reasoning about systems and failure. Option a can only become a viable choice if
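an appropriate specification argument can be found to replace the `??' in assertions such as Faulty(??, hardware_S) and Faulty(??, software_S), and a composition operator ** for the specifications of parts is defined whose logical relation to the specification of the whole system is understood.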
When these problems are solved, we may at that time choose to reevaluate the present evidence in favor of option c. In the meantime, the current argument establishes that it is preferable to consider the specification as part of the system, as in option c (Footnote 15).
For example, the supposed-use of a modern transport aircraft such as an Airbus A320 or a Boeing 777 should include that it shall fly without crashing, and always land and take off successfully. We can tell that these are components of the aircraft's supposed use, by supposing (counterfactually, we hope) that some such airplane crashes while the pilots are doing everything they should be doing. We're likely to find ourselves saying `Well, the airplane shouldn't have crashed in those circumstances', thereby stating part of its supposed-use with perfect hindsight. Notice that the supposed-use isn't a thing that is itself correct or not correct. It's posited, it simply is. One can point to it, but one cannot say it's right or wrong in itself. In contrast, one can say that a formal-spec is incorrect, by matching it against what one knows of the supposed-use and finding discrepancies.
One inconvenient feature of supposed-use is that various aspects of it may only be discovered with hindsight. So one would be truly surprised to find all these expectations which I am calling the supposed-use explicitly written in the requirements specification, the formal-spec, for any device with a moderately complex working environment such as an airplane. But one goal of the requirements-capture process is somehow to translate as much of the supposed-use as possible into more technical terminology in the formal-spec, in such a way that formal reasoning may be used to connect these formally expressed requirements with the design and other formal specifications of the system. I recall the evidence that most failures of systems designed for high reliability were traced to a misfit between ultimate user expectations (the supposed-use) and the `official' requirements specification (the formal-spec) (Lut93.1), (Lut93.2).
It is standard in system specification to interpret the definition of a system as defining a single set of behaviors. If a definition of a system defines a non-trivial class of sets of behaviors (i.e. a class with at least two members), then the specification can rightly be regarded as ambiguous - there is more than one set of system behaviors that the specification allows - and I shall call it so. The relation of match compares a formal-spec with a supposed-use. Suppose both are unambiguous. Subtle relations have been defined, such as the relation of Refinement Mapping (AL91) (coupled with logical implication), or the relations of Modal Process Logic (Bru93). Both of these methods require that the supposed-use is formally described. They are also not symmetric relations, and an intuitive consideration of match would suggest that it should be symmetric (that whatever the order in which its arguments are written, its truth value is the same).
Intuitively, the purpose of `matching' formal-spec and supposed-use is that we intend that they shall say `the same thing'. That suggests that they share the same language, and as far as possible denote the same set of behaviors. When can they be said not to match? Suppose that a system satisfies its formal-spec. How can the formal-spec not match the supposed-use? Intuitively, only if there is some system behavior B which is outside the range of behaviors allowed by the supposed-use. If the supposed-use is not ambiguous, it defines a unique set of behaviors and the behavior B must be outside this set. If the supposed-use is ambiguous, then it is somewhat harder to define what is meant by the `range of behavior' allowed by the supposed-use. If B is outside of all of the sets in the class of sets of behaviors corresponding to the supposed-use, then it is clear that B is outside the `range of behaviors' allowed.
I shall be interested in exhibiting examples in which the formal-spec and supposed-use match. I therefore choose my cases to be unambiguous, that is, both the formal-spec and the supposed-use define a unique set of behaviors. How should these behaviors be related? Considering unambiguous formal-spec and supposed-use, one can see that the formal-spec only matches the supposed-use when there is no behavior consistent with the formal-spec that is inconsistent with the supposed-use. That is to say, the set of behaviors allowed by the formal-spec is a subset of the behaviors allowed by the supposed-use. The relations of (AL91) and (Bru93) both satisfy this condition, although since they are primarily concerned with comparing a design specification (an `implementation') with a requirements specification (rather than two specifications supposed to be saying the same thing), they include extra apparatus to cope with the difference in language normally to be found in different sorts of system specifications.
If the formal-spec actually logically entails the supposed-use, then the behaviors described by the former are indeed a subclass of the behaviors described by the latter. That is, for unambiguous Spec_S and supposed-use_S,
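    (Spec_S => supposed-use_S)  =>  match(Spec_S, supposed-use_S).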
Furthermore, if Spec_S and supposed-use_S are written in the same language, and we accept that match in this case is symmetric,
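    match(Spec_S, supposed-use_S) <=> match(supposed-use_S, Spec_S),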
then we can conclude that match is a predicate of predicates including the relation of logical equivalence on predicates:
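    (Spec_S <=> supposed-use_S)  =>  match(Spec_S, supposed-use_S).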
May this implication be reversed? Certainly, in the case in which the formal-spec is identical with the supposed-use, they must satisfy both the extensional condition (that there is no behavior allowed by the one that is inconsistent with the other) and any intensional conditions that refer simply to differences in syntax. I can see no argument that suggests that identical formal-spec and supposed-use may not match. Further, if one has in mind a supposed-use for a system according to which any behavior is allowed, that is, a logically-true requirement, then any logically-true predicate should suffice for the formal specification, as suggested implicitly in An Example .... The relation of matching deserves some further logical investigation. However, the present discussion suffices for establishing the final argument that the example in An Example ... does the job required. That argument consists in asserting, for S the system in An Example ..., that match(Pi(S),supposed-use_{S}).
If S implements its specification but nevertheless S is not correct, then it must be the case that the formal-spec does not match the supposed-use. But we have seen, in the vocabulary of The Logical Relation Between Supposed-Use and Formal-Spec, how, in the case of supposed-use chosen to be a logically-true predicate, a formal-spec, namely any logically-true predicate, may be chosen in order to fulfil the criteria of matching. Therefore, they match (from Proving Software Correct). In any case in which we can establish the logical or a priori status of supposed-use, we stand a chance of being able to define a formal-spec which matches, and thereby eliminate one source of error in building such systems.
Note that proving the logical equivalence of the supposed-use with the formal-spec does not, by itself, guarantee the correctness of a system. For example, such a system may be radically altered by an explosion, and the system in its transmuted state may fail to satisfy the formal-spec, and therefore equivalently the supposed-use. But for System S, this does not destroy the correctness property, because its supposed-use is logically true, and nothing whatever that happens to a system can cause it not to satisfy a logically true supposed-use.
Consider a bicycle, and a new worker in the bicycle-building factory, who is given instructions on how to join components together. Spokes fit into hubs, rims fit on spokes, tires fit on rims, wheels fit here on the frame, they also fit here ...... He makes the bicycle, but it can't be ridden because he has only attached one wheel. The requirements happened to state which bits go in which places, but failed to state that all the right bits must go in all the right places. (Footnote 17) He has built a machine to formal-spec, but formal-spec and supposed-use don't match. And there's no software in sight. So we have no plausible reason to call this kind of failure a `failure concerning software'.
According to the proposition discussed in A Proposition About Correctness, this may not be the whole truth. If we include the supposed-use in with the definition of system as a formal-spec, then the system is only partially physical. It's also partially non-physical. The expectations of its function are not physical objects, but they are part of the system.
On the other hand, taking option c, in which we don't include the supposed-use in with the physical parts, although the `implemented system' may be considered to be physical, proving the system correct involves comparing some non-physical things, namely observations of behaviour, with the supposed-use. This may only be done via the intermediation of descriptions of this behaviour and propositions expressing supposed-use. Looking at properties of physical parts alone does not suffice. This leads to the second set of considerations.
In talking about system behaviour, or comparing it with something else, one does not grasp the behaviour directly (since it comes and goes), but through a description. This is because one is considering timeless properties of behaviour - one wants to make observations of the behaviour and refer to those observations at some later time. So let's give ourselves more room, and talk about descriptions of behaviour, rather than just behaviour.
Fetzer's assertion rests on the intuition that we predict individual behaviors, rather than properties of behaviors. However, predictions are usually made through a description of some sort. And many behaviors may satisfy even a description thought to be narrow - for example, behaviors which exhibit features which are not talked about in the description (e.g., which are not considered relevant). Predictions generally are descriptions, rather than specifications of individual behaviors.
If we consider descriptions of behaviour which a system may contingently satisfy, such as behaving this way rather than that other way, then there are undoubtedly some such behaviours which cannot be predicted with certainty. This is roughly what `contingent' means.
There are descriptions of behavior which are logically true. It may successfully be predicted that a system will satisfy any logically-true specification. Suppose there are descriptions which are in fact always true, even although they might not be logically true, for example, descriptions which assert, amongst other things, that there exists an infinite set (Footnote 18). Similar considerations to those for logically-true predicates show that it may be asserted that any future behavior will satisfy such a predicate.
There is also a quirky possibility which one may consider. Suppose one believes in (Leibnizian) determinism, that is, that the future states of the universe are uniquely determinable from any past configuration. Then presumably one can predict the future states even of our system exhibiting `contingent' behaviour by knowing sufficiently much about the past state of the world. Most philosophers and scientists are not determinists, but nevertheless respectable versions of determinism have been offered. A determinist argument directly contradicts Fetzer's conclusion, even for contingent behavioural descriptions.
Thus, whether `future behavior' can be predicted with certainty rests on the logical or metalogical status of descriptions under which one considers that behavior, including whether one is prepared to allow `synthetic a priori' assertions, or Leibnizian determinism. It is the kind of question that analysis shows to have less and less precise a meaning the more one considers the possibilities.
where in this case S is instantiated as software_{airplane}; Initial-State_S is a rigid variable (one that does not change its value from state to state) and State is a state function (Footnote 20). The fact that the software-irrelevance principle corresponds to a valid argument of temporal logic does not by itself entail that the software-irrelevance principle is a metaphysical truth. The semantics used for the interpretation into temporal logic must first be shown to be appropriate, for example, that a state is an appropriate object that may be denoted by a term, that the predicate of failure can take an argument that denotes the state, and that the logic is extensional, that is, that the assertion that a state is failed depends only upon the state and not upon the term chosen to denote the state. So, without these principles having been established, a translation of the software-irrelevance principle into a logical truth of temporal logic serves only to show it is consistent with and follows from these general ontological and semantic principles of temporal logic, and not to establish it outright. An argument for the truth of the ontological and semantic principles of temporal logic to talk about action systems has been attempted in (Lad94). However, principles such as the software-irrelevance principle have been used to justify these principles, rather than the other way round.
This explanation holds that any potential violation of the principle must be accommodated by, for example, changing (or `clarifying') the meanings of the terms involved. For example, we can't mean `environment', `state', `state property' unless we also agree to accept the invariance principle that comes with them. This kind of approach to clarifying the use of terminology in context is related to a thesis of Duhem that has been argued eloquently by Quine - see e.g., (Q60) (Footnote 21).
So such principles may have the certainty of mathematical truths in part because we commit to their continuing truth in the future. We would rather adjust to a potential violation by changing slightly what we had taken `environment' to denote, than by giving up the invariance principle (Footnote 22). A commitment now to a principle to hold in the future may indeed be made, and may be kept or violated. But when a commitment is violated, we do not locate the fault in the meaning of what is committed to, but in the sincerity of the actor. If I break a promise, then it is inappropriate to say `Oh, I discovered the promise was wrong', since promises are not the kind of things that are wrong or right (unlike declarative statements). They are the kind of things that are kept or broken. The investigation of different kinds of acts such as promises, declarations, commitments, that may be performed with language is known as the study of speech acts, originated by Austin (A75) (A78), and developed particularly by Searle (Sea69) (Sea79), who calls these acts of commitment Commissives.
Quine does not recognise certain categorical distinctions among sets of statements we take to be true, such as `a priori' truths and `contingent' truths. He has argued that the truth or falsity of statements has a similar semantic status, no matter how the statements are classified. When there are conflicts or misfits in our ways of using language, something has to give (Footnote 23). But we are more committed to retaining some kinds of statements (the logical truths) than others (The sun will rise tomorrow) and yet others (That driver will have an accident if he drives like that all day), because some kinds of revision cause greater reverberation in our conceptual schemes (Oh, dear, I suppose the law of non-contradiction is wrong!) than others (Oh, dear, maybe the mail didn't arrive today after all!).
Supposing that collections of statements form an interconnected web like this, with varying degrees of freedom in its various parts, gives us one way of seeing how we may predict that the software-irrelevance principle will not be violated in the future by our parked airplane. We are not making a prediction in the contingent sense, but instead undertaking not to violate the principle; if necessary, to push and pull the web of interconnected statements in a different manner to preserve the principle, adjusting other definitions, terms and truth values in order to take up the slack. (Equally, if this account of statements-as-a-web is mistaken, that does not destroy our argument concerning the inviolability of the airplane safety property, but merely a suggested explanation of how it happens.)
Looking where we locate the faults therefore helps us elucidate our commitments to principles such as the principle of invariance. It shows us how, and in what way, we may push and pull the web of meanings, and which parts we choose to remain fixed. This may help us to explain how physical things may be proven to satisfy certain properties. Notice, though, that properties may not be chosen at random to be violated or not. The web imposes constraints, and we can describe those constraints. However, if one accepts arguments similar to those of Quine, ultimately we have only limited means to try to understand the world view of someone who, when the postman doesn't call as expected, chooses instead to give up the law of non-contradiction rather than his belief (to us, false) about the postman having arrived.
It follows that a failure which is not a transient failure is one which is not self-rectified in the course of time. Failures which are not transient failures lead to permanent disablement of the system until they are rectified, and therefore a non-transient failure may be characterised as one which persists. I shall apply the predicate Faulty to a system which is exhibiting such a non-self-rectifying problem, and capture this property by the axiom of temporal logic (Footnote 24).
in which [] is the `always' operator of temporal logic: []A is true, where A is an assertion, if and only if A is true in all states from now throughout the future. Also, since Faulty is a persistent property of systems, I shall take it to satisfy the axiom
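    Faulty(S) => []Faulty(S).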
From now on, I shall also take Faulty as the converse of Correct, namely
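    Faulty(S) <=> ~Correct(S).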
We have considered the following valid argument of temporal logic:
where Initial-State_S is a rigid variable and State is a state function. I have noted that this principle is supported by an ontology of system states. If systems have states, we may connect the argument to the current definitions of failure by the equivalence
With this definition, we may deduce from the above argument and the other definitions that the following argument is valid:
which yields the conclusion that we drew from intuition, that the problem with the software-inert airplane was a hardware problem. This is a pretty simple example, but it does show how conclusions which we individually draw by intuition may instead be founded on more general principles with wider justification.
I wish to avoid a discussion here of how my terms relate to the standard engineering terms. It suffices that they are roughly consistent with the standard terms, and that the axioms define just the properties of these terms that I shall use in the final example. If the reader does not concur with my choice of terms, she is free to use her own.
hardware |= gate-level implementation &
gate-level implementation ->_{RM} MC68020 code &
MC68020 code ->_{RM} GCD in Assembler &
GCD in Assembler ->_{RM} GCD spec &
match(GCD spec,GCD)
where ->_{RM} means that there is a refinement mapping (AL91), an assignment of state functions of the left-hand side to variables on the right-hand side, so that the formula is provable in logic with the state functions of the Refinement Mapping replacing the variables on the right-hand side. Boyer and Yu omit the first two steps, and prove the next two. Let us assume that the second step is true. It is left to see whether hardware |= gate-level implementation and whether match(GCD spec,GCD). The first step I shall relate to the assertion ~Faulty(hardware). The next three constitute a proof that gate-level implementation ->_{RM} GCD spec because ->_{RM} is transitive (AL91). What remains is the assertion of match.
The Boyer-Yu example is not yet a complete proof of the *software* correctness of a system implementing this description, since the proof that the actual gate-level design of the chip implements the architecture of the chip (the second step) is omitted. There is no problem of principle in providing it. For the real sceptics, we may note that Hunt has verified a gate-level design of a microprocessor (the FM8502) against its architectural specification (Hun87). The example could have been targeted to an FM8502 architecture.
In order for us to accept the Boyer-Yu assertion, which I'll take to be
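    MC68020 code ->_{RM} GCD spec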
as a theorem, we have to know that it indeed logically follows via the refinement mapping, that is, that if appropriate substitutions are made, the right-hand side is a logical consequence of the left-hand side. Since the truth of this theorem for this example was obtained by the theorem prover Nqthm (BM88), we must establish that Nqthm is sound (that is, that it only obtains conclusions that logically follow from the premises). Soundness of Nqthm itself is not a trivial thing to establish. But on general principles which I will not develop, I claim that, if not Nqthm itself, then some modification of Nqthm is sound. Let's just suppose we are working with a sound version of Nqthm (the point of the example does not depend on this hypothesis in any crucial way that I can see).
The supposed-use of this system is to produce a result which satisfies a predicate of formal mathematics, which may be regarded as equivalent to a statement in a language with precise semantics (a logical language, with maybe certain extra axioms such as the axiom of infinity). As I argued in The Logical Relation Between Supposed-Use and Formal-Spec, it follows that
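    GCD spec <=> GCD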
and therefore
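    match(GCD spec, GCD).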
For such precise mathematical uses, it would seem always to be possible to make this move. Precisely which uses of computation allow this maneuver is a topic of research that leads beyond this paper, but I suggest it could be a rich source of progress in system development technology, since the more we know about this, the easier it is to eliminate one potential source of error in system design.
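To illustrate what the assertion match(GCD spec, GCD) amounts to, here is a small sketch in Python (the language and the names is_gcd and gcd are mine, chosen purely for illustration; Boyer and Yu's result is a machine-checked Nqthm proof about MC68020 code, not a test of a high-level program). The supposed-use is a predicate of formal mathematics, and any candidate implementation can be compared against it.

    # Illustrative sketch only (hypothetical names). The GCD specification is
    # stated as a mathematical predicate; a candidate implementation (Euclid's
    # algorithm) is then compared against that predicate on sample inputs.

    def divides(d, n):
        return n % d == 0

    def is_gcd(g, a, b):
        # The formal-spec: g is positive, g divides both a and b, and every
        # common divisor of a and b divides g.
        if g <= 0 or not (divides(g, a) and divides(g, b)):
            return False
        return all(divides(d, g)
                   for d in range(1, max(a, b) + 1)
                   if divides(d, a) and divides(d, b))

    def gcd(a, b):
        # A candidate implementation: Euclid's algorithm.
        while b:
            a, b = b, a % b
        return a

    # Comparing the implementation against the specification on sample inputs.
    assert all(is_gcd(gcd(a, b), a, b)
               for a in range(1, 40) for b in range(1, 40))

Of course, checking sample inputs is not a proof; the point of the Boyer-Yu work is precisely that the corresponding assertion about the machine code is proved, not sampled.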
The part-failure equivalence condition from The Specification as System Part and the definition of Correct as the converse to Faulty from Failures, Faults and their Logical Interconnections yield
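    Correct(S) <=> ~Faulty(hardware_S) & ~Faulty(software_S) & ~Faulty(Spec_S).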
To be able to assert Correct(S) from the part-failure equivalence condition, we are missing a proof that
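    ~Faulty(hardware_S),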
i.e., that ~Faulty(hardware_GCD). To be able to assert Correct(S) from the five-step conjunction for this example, we lack a proof that
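    hardware |= gate-level implementation.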
How are these two assertions related? To establish that it's plausible that
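    ~Faulty(hardware) <=> hardware |= gate-level implementation,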
I return to the earlier discussion of commissives. Suppose we are assured through proof that gate-level implementation ->_{RM} GCD spec and that (through an argument yet to be given) match(GCD spec, GCD). If you build a physical system and implement this design on it and the physical system fails during operation, you won't under any circumstances talk to a *software* engineer; you'll call someone with a hardware-tester who knows where to look for bubbles in the silicon. In the words of How Predictions May Be Rigorous, the (completed) Nqthm proof along with the assertion of matching entails that if anything goes wrong, it must be a physical fault in the physical chip. Note this principle also takes care of the possibility of random destruction of the chip (nuclear explosions nearby), gamma rays altering the values of bits, and other physical disturbances. I conclude that the assertions ~Faulty(hardware) and hardware |= gate-level implementation are equivalent. Thus, the five-step correctness assertion is correct: it corresponds to the part-failure equivalence principle, and the Boyer-Yu theorem is the most complex step in proving the correctness, the real correctness, of the GCD algorithm.
Progress in a science may come about through the eventual resolution of technical conjectures on which there is legitimate initial disagreement. This may commit us to the principle that others with whom we disagree should be able eventually to demonstrate that our beliefs are incorrect, or vice versa. In contrast, it seems to me that many computer scientists have preferred to give up on the debate concerning proofs of system correctness. If we ignore a problem, we are thereby unlikely to obtain a universally agreed answer. If an answer is required for P=NP, why not also for the existence of provably functional systems?
I conjecture that the fuzziness of the debate on provably correct systems involves
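(a) a lack of common meaning attached to commonly-used expressions;
(b) a deficit in effective methods of reaching agreement;
(c) a consequent deficiency in the ability to tell truth from falsehood; and
(d) hidden interests.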
None of these are trivial components either of computing science or of computing practice. A lack of common meaning to common expressions, and a deficit in effective methods of reaching agreement lead to a deficiency in the ability to tell truth from falsehood, and these factors, along with hidden interests as in (d), impede our ability as a profession correctly to analyse approaches to problems of major and sometimes life-critical interest. Feature (d) in particular strikes at the heart of progress in science. I have nothing technically relevant to say here concerning (d).
The ideas in this section bear no logical relation to the correctness of any formal claims about the possibility of system verification. Arguments and examples stand and fall on their own terms. Similarly, my discussion of (a) - (c) is logically independent of the argument I have given that there is a provably correct system.
In philosophy, a counter-claim to a general claim may frequently be argued in general terms, but once a potential weakness of an argument is located, the discussion soon turns to arguing the merits and demerits of examples. The rhetorical device of the counterexample is well established, as well as the device of the paradigm example.
In contrast, the discussion over Fetzer's paper generated not one specific example to test out the opposing views (MA89.1), (MA89.2), (MA89.3), (MA89.4), (MA89.5). A letter written by a collection of eminent researchers suggested mainly that it was irresponsible of the journal to publish the paper (AB89). A letter from other eminent researchers suggested that Fetzer was ignorant, without addressing his argument (PCG89). Fetzer noted correctly that both issues were irrelevant to his argument. It is likely that such commentary, had it been submitted to a mathematical or philosophical journal, would have been considered eccentric. Nevertheless, some serious attempts were made to address features Fetzer had illuminated, such as the social issues (DR89), or the relation between formal mathematics and the physical world (Bar89).
So, some of the leading experts in the field to which Fetzer's argument was addressed felt, after due consideration, that an argument from authority was a more effective, and a more acceptable, way of refuting his claim than providing a counterexample (Footnote 25). I believe that neither philosophers nor mathematicians would make this particular choice of rhetorical device in a similar case in their own field, for two reasons. Firstly, it does not shed any light on the truth or falsity of the claims Fetzer made. Secondly, it does not shed any light on the quality of his argument (Is it complete? Full of holes? Irrelevant? If so, how and why? Can it be fixed? Is there a counterexample?).
The choice by computer scientists of which rhetorical devices to employ in refutation shows vast discrepancies between the current practice of computer science and the more established fields of mathematics and the philosophy of language, science, mathematics and logic. Yet some fundamental issues in computer science are very close to those considered in these other fields, as demonstrated by Fetzer's article, as well as by the increasing collaboration of neurobiologists, computer scientists, psychologists and philosophers in attempting to explain certain features of the mind and brain. (Consider, by way of contrast - when have philosophers since Aristotle attempted to shed light on, say, the technology of horticulture?)
We need to assess the effect of our choices of rhetorical device on the development of our science. After all the claims and counterclaims, do we know, or have we been persuaded, whether program verification is in principle impossible, or not? Further, are we provided with any insight into whether our current research or practice in computing and software engineering is foundationally shaky, or fundamentally sound?
Firstly, those of us who have worked in close liaison with industrial software development are aware of the often palpable aversion expressed to the use of the rigorous standards and techniques of formal mathematics and technical philosophy. ``We've tried it, we messed it up, so we *know* it doesn't work''. ``Our engineers can't understand it, *therefore* it isn't any good''. ``It's *absurd* to believe that software can ever have a reliability of 1''. The authors of the `irresponsibility' reply to Fetzer (AB89) implicitly acknowledge that misleading ideas may propagate to the detriment of computing science and practice. So may insufficiently rigorous styles of rhetoric.
Secondly, experts in an essential area of computing practice apparently believe, as a consequence of their other beliefs, something which is demonstrably false (see Second Corollary: Software Reliability). We may conclude that there is a part of their belief system which causes them to believe a falsehood. We may even erect a not-so-straw man, and have a fictitious such expert claim that he believes this falsehood follows from extensive experience and observations. Since it is logically impossible to infer a falsehood from truths using logically correct inference rules, the falsehood can arise only from a logical mistake somewhere, or from observations which have been misinterpreted. Should not `good engineering practice' therefore dictate that we attempt to find out where that mistake is, or where observations have been misinterpreted?
I propose that rigour plays as important a role in good engineering practice as it does in mathematics, and for similar reasons.
Having established that some kinds of properties of the future behaviour of physical systems can admit of rigorous proof, I attempted further to explain how properties could attain that distinction. For example, properties may be logically true. Or, if the supposed use allows the possibility of pure hardware faults to occur, we can obtain provable satisfaction if supposed-use and formal-spec match (coined as a term of art), or match modulo some inviolable principle, and also if the full gate-level design matches the formal-spec. This used the notion of commitment to a proposition or principle, investigated by Searle and other speech act theorists, and in a different context by Duhem and Quine. In particular, I gave an informal argument to show that safety properties of physically-realised systems such as in a fly-by-wire transport airplane can admit of proof, if no liveness properties are specified. I offered a formalisation of some of the logical properties of the concepts involved.
The final discussion concerned an example of a computer design (complete with chip) which computes a useful function (a GCD), whose *software* was proved correct (modulo a hardware architecture verification which has not yet been performed) by Boyer and Yu. I observed that for this case supposed-use and formal-spec may be taken to be identical. It follows that if a physical system using the appropriate chip implements this design and fails, the failure is in the physical hardware.
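For concreteness, here is a minimal sketch in C of the kind of routine at issue. It is an illustration only: the Boyer-Yu theorem (BY92), (Y92) concerns the MC68020 object code of a GCD program, and nothing below should be read as the verified code itself.

    /* Illustrative Euclidean GCD in C; a sketch only, not the code whose
       MC68020 object code Boyer and Yu verified. */
    unsigned int gcd(unsigned int a, unsigned int b)
    {
        /* Invariant: gcd(a, b) equals the gcd of the original arguments. */
        while (b != 0) {
            unsigned int r = a % b;
            a = b;
            b = r;
        }
        return a;   /* gcd(a, 0) = a; for example, gcd(12, 18) = 6 */
    }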
These arguments concern the foundations of system engineering and provide a refutation of certain views about those foundations and a vindication of others. My approach is that of philosophical logic. None of the arguments I presented attempted to address the epistemological argument of De Millo, Lipton and Perlis, that even though systems may be correct, it is doubtful for even moderately complicated systems that we could ever come to know this (DLP79).
Finally, I considered the styles of rhetoric currently employed in the science of computing. I indicated how they may fail to help us, for example by diverting us from searching for examples and counterexamples to establish or refute claims, and how therefore they may affect engineering standards.
(AL91): M. Abadi and L. Lamport, The Existence of Refinement Mappings, Theoretical Computer Science 82(2):253-284, May 1991. Back
(AL93): M. Abadi and L. Lamport, Composing Specifications, ACM Transactions on Programming Languages and Systems 15(1):73-132, Jan 1993. Back
(AL94): M. Abadi and L. Lamport, An Old-Fashioned Recipe for Real Time, ACM Transactions on Programming Languages and Systems 16(5):1543-1571, Sep 1994. Back
(AS85): B. Alpern and F. B. Schneider, Defining Liveness, Information Processing Letters 21:181-185, Oct 1985. Back
(AS87): B. Alpern and F. B. Schneider, Recognizing Safety and Liveness, Distributed Computing 2:117-126, 1987. Back
(A93), T. Anderson, Private communication, 1993. Back
(AB89): M. Ardis, V. Basili, S. Gerhart, D. Good, D. Gries, R. Kemmerer, N. Leveson, D. Musser, P. Neumann, and F. von Henke, Editorial Process Verification, Letter, Communications of the ACM 32(3):287-288, Mar 1989. Back
(A75): J. L. Austin, How to Do Things with Words, ed. Urmson & Sbisà, Oxford, Clarendon Press, 1975. Back
(A78): J. L. Austin, Philosophical Papers, ed. Warnock & Urmson, Oxford, Clarendon Press, 1978. Back
(Bar89): J. Barwise, Mathematical Proofs of Computer System Correctness, Notices of the American Mathematical Society 36(7):844-851, Sep 1989. Back
(BM79): R. S. Boyer and J. S. Moore, A Computational Logic, Academic Press, 1979. Back
(BM88): R. S. Boyer and J. S. Moore, A Computational Logic Handbook, Academic Press, 1988. Back
(BY92): R. S. Boyer and Yu Yuan, Automated Correctness Proofs of Machine Code Programs for a Commercial Microprocessor, in Automated Deduction: Proceedings of CADE11, pp416-430, ed. D. Kapur, volume 607 of Lecture Notes in Computer Science, Springer-Verlag, 1992. Back
(BYL95): R. S. Boyer and Yu Yuan, Proof of a GCD Algorithm on an MC68020, adapted by P. B. Ladkin from (BY92) and comments of R. S. Boyer, 1995. Available from http://www.rvs.uni-bielefeld.de. Back
(Bru93): G. Bruns, Applying Process Refinement to a Safety-Relevant System, submitted for publication, May 1993. Back
(DLP79): R. DeMillo, R. Lipton, and A. Perlis, Social Processes and Proofs of Theorems and Programs, Communications of the ACM 22(5):271-280, May 1979. Back
(DR89): J. Dobson and B. Randell, Program Verification: Public Image and Private Reality, Communications of the ACM 32(4):420-422, Apr 1989. Back
(D73): M. Dummett, Frege: Philosophy of Language, Duckworth, 1973. Back
(D78): M. Dummett, Truth and Other Enigmas, Duckworth, 1978 Back
(Fet88): J. H. Fetzer, Program Verification: The Very Idea, Communications of the ACM 31(9):1048-1063, Sep 1988. Back
(Fie89): H. Field, Realism, Mathematics and Modality, Basil Blackwell, Oxford, 1989. Back
(G84): M. R. Genesereth, The Use of Design Descriptions in Automated Diagnosis, Artificial Intelligence 24:411-436, 1984. Back
(GSW89): R. Greiner, B. A. Smith and R. W. Wilkerson, A Correction to the Algorithm in Reiter's Theory of Diagnosis, Artificial Intelligence 41(1):79-88, Nov 1989. Back
(Humxx): D. Hume, A Treatise of Human Nature, ed. Selby-Bigge, Oxford, Clarendon Press, 1978. Back
(Hun87): W. A. Hunt, The Mechanical Verification of a Microprocessor Design, Research report CLI-6, Computational Logic, Inc., Austin, Texas, 1987. Back
(Lad93): P. B. Ladkin, Proving Systems Correct, Technical Report TR 103, Department of Computing Science and Mathematics, University of Stirling, Scotland, Mar 1993. Back
(Lad94): P. B. Ladkin, Why use Tense Logic to describe Digital Systems?, unpublished manuscript, 1994. Back
(Lad95): P. B. Ladkin, Formal but Lively Buffers in TLA+, Technical Report, Faculty of Technology, University of Bielefeld, 1995. Available from http://www.rvs.uni-bielefeld.de. Back
(Lad9x): P. B. Ladkin, A Review of Fetzer's Very Idea, to appear in http://www.rvs.uni-bielefeld.de. Back
(Lad96.1): P. B. Ladkin, On Needing Models, Technical Report, Faculty of Technology, University of Bielefeld, 1996. Available from http://www.rvs.uni-bielefeld.de. Back
(Lam77): L. Lamport, Proving the Correctness of Multiprocess Programs, IEEE Transactions on Software Engineering 2:125-143, 1977. Back
(Lam93): L. Lamport, Private correspondence, Mar 1993. Back
(Lam94): L. Lamport, The temporal logic of actions, ACM Transactions on Programming Languages and Systems 16(3):872-923, May 1994. Back
(Lev93): N. Leveson, Private group discussions, Jan 1993. Back
(Lew91): D. Lewis, Parts of Classes, Basil Blackwell, Oxford, 1991. Back
(Lut93.1): R. R. Lutz, Analyzing Software Requirements Errors in Safety-Critical Embedded Systems, in Proceedings of the IEEE International Symposium on Requirements Engineering, San Diego, CA, pp126-133, IEEE Computer Society Press, Jan 1993. Back
(Lut93.2): R. R. Lutz, Targeting Safety-Related Errors During Software Requirements Analysis, in SIGSOFT '93: Symposium on the Foundations of Software Engineering, Los Angeles, CA, ACM Software Engineering Notes, 18(5):99-106, Dec 1993. Back
(Mad92): P. Maddy, Realism in Mathematics, Oxford, Clarendon Press, 1992. Back
(DM93): R. D. Dorsett and P. Mellor, The Airbus A320 Electrical Flight Control System, unpublished manuscript, 1993. Back
(MS95): S. P. Miller and M. Srivas, Formal Verification of the AAMP5 Microprocessor: A Case Study in the Industrial Use of Formal Methods, in Proceedings of WIFT'95, the Workshop on Industrial-Strength Formal Specification Techniques, IEEE Computer Society Press, 1995. Back
(Myc89): J. Mycielski, The Meaning of Pure Mathematics, Journal of Philosophical Logic 18:315-320, 1989. Back
(Myc92): J. Mycielski, Quantifier Free Versions of First Order Logic and Their Psychological Significance, Journal of Philosophical Logic 21:125-147, 1992. Back
(P90): D. L. Parnas, Documentation of Communications Services and Protocols, in Formal Description Techniques II, ed. S. T. Vuong, pp277-280, North-Holland, 1990. Back
(PCG89): L. Paulson, A. Cohn, and M. Gordon, Letter, Communications of the ACM 32(3):375, Mar 1989. Back
(Q52): W. V. O. Quine, Methods of Logic, page xii, Routledge Kegan Paul, 1952. Back
(Q60): W. V. O. Quine, Word and Object, MIT Press, 1960. Back
(Q61): W. V. O. Quine, Two Dogmas of Empiricism, in From a Logical Point of View, 2nd edition, Harvard University Press, 1961. Back
(R87): R. Reiter, A Theory of Diagnosis from First Principles, Artificial Intelligence 32:57-95, 1987. Back
(Rus93): J. Rushby, Private group communication, 1993. Back
(Ryl63): G. Ryle, The Concept of Mind, Penguin (Reprinted), 1963. Back
(Sea69): J. R. Searle, Speech Acts: an essay in the philosophy of language, Cambridge University Press, 1969. Back
(Sea79): J. R. Searle, Expression and Meaning: studies in the theory of speech acts, Cambridge University Press, 1979. Back
(Sim87): P. Simons, Parts: A Study in Ontology, Oxford, The Clarendon Press, 1987. Back
(T93): H. W. Thimbleby, On Programming and Understanding Computers, unpublished manuscript, 1993. Back
(W86): C. Wright, Realism, Meaning and Truth, Basil Blackwell, Oxford, 1986. Back
(Y92): Yu Yuan, Automated Proofs of Object Code for a Widely Used Microprocessor, Ph.D. Thesis, University of Texas at Austin, Dec 1992. Available from University Microfilms, Ann Arbor, Michigan. Back
(Y93): Yu Yuan, Automated Proofs of Object Code for a Widely Used Microprocessor, Research Report 114, Digital Equipment Corporation Systems Research Center, Palo Alto, California, Oct 5, 1993. Back
(MA89.1): Many authors, Correspondence, Communications of the ACM 32(3):287-290,374-381, Mar 1989. Back
(MA89.2): Many authors, Correspondence, Communications of the ACM 32(4):506-512, Apr 1989. Back
(MA89.3): Many authors, Correspondence, Communications of the ACM 32(7):790-792, Jul 1989. Back
(MA89.4): Many authors, Correspondence, Communications of the ACM 32(8):920-921, Aug 1989. Back
(MA89.5): J. Dobson and B. Randell, Letter, Communications of the ACM 32(9):1047/1050, Sep 1989. Back
(Many93): T. Anderson, P. Ladkin, N. Leveson, P. Mellor and others, Group email discussion, Jan-Feb 1993. Back
Footnotes
Footnote 1:
I call a property
or predicate logically valid or logically true
if everything has that property, that is, if any proposition
formed from that predicate by instantiating its argument place is true.
Similarly, I say that a predicate P logically entails another,
Q,
if every assertion formed by instantiating the argument place of P
logically entails the assertion formed by instantiating the argument
place of Q with the same argument.
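In the symbolic style of later footnotes, and purely as a restatement of the definitions just given:

    P is logically true      iff   (forall x) P(x)
    P logically entails Q    iff   (forall x) (P(x) => Q(x))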
Back
Footnote 2:
whatever that might be. We do not need to commit
ourselves. {\tt b} might include
some description of S's environment E and E's
behaviour also.
Back
Footnote 3:
Now if only the light would go on when I opened the door.
Back
Footnote 4:
I shall call this the specification.
The relation might be binary, but it's
more common to see more than one system specification,
for example a requirements specification and a design specification.
The relation may therefore become more complicated,
involving fit between various levels of
specification and its intended use. We won't pursue this further
here.
Back
Footnote 5:
I distinguish between
logical and mathematical entities because not everyone
agrees that mathematics is logic, even though most agree that if
anything is a priori, both logic and pure mathematics are.
One common argument that mathematics is not logic concerns the
posited existence of an infinite set, which is not held to be a
logical assertion, but seems to be necessary for much mathematics,
although see
.
Back
Footnote 6:
For example, what I am suggesting seems to strain
against the vocabulary used by Parnas
(P90),
who refers to
everything that is not in the physical hardware as `documentation'.
According to the proposition established in this section, some of the
`documentation' is either part of the system, or is an integral part
of talking about correctness. It seems more helpful therefore clearly
to distinguish between those parts of the documentation that might be
essential in this regard, and those other parts that aren't. We shall
do so subsequently.
Back
Footnote 7:
I shall be using
an analysis adapted to classical propositional logic. I propose that a similar
analysis may also be given with constructive, intuitionistic, logic.
Back
Footnote 8:
I confess, in truth my
fridge does plug in, contrary to what I just supposed, but nonetheless it
doesn't keep things cold.
Back
Footnote 9:
Proposition P is a contrary of
proposition Q if and only if P => ~Q.
Predicate P(-) is a contrary of predicate Q(-) if and only if
(forall x)(P(x) => ~Q(x)). A similar definition may be
given for predicates with multiple arguments. A property and its
negation are contraries.
Back
Footnote 10:
Actually, I should take the type to be
T x specifications -> {True,False}
where T represents a list of other functional dependencies
remaining yet unacknowledged.
However, it seems that these extra parameters could be handled
similarly to the way
I'm about to deal with the dependency on specifications,
so I shall omit T from the discussion.
Back
Footnote 11:
This argument was suggested as a typical example of
industrial failure reasoning by
(A93),
who, it should be remarked, does not endorse it!
Back
Footnote 12:
I'm indebted to John Rushby for this reference.
Back
Footnote 13:
Note, however, that
there are certain fault-tolerant system designs in which a subsystem
may fail but the whole does not, so the parts considered to
fulfil the equivalence must be selected with care.
Back
Footnote 14:
Using a single symbol for the
operator, no matter what the arguments, suggests psychologically that
we have a uniform rule for computing it. Certainly, if we use it
in practice, we must have such a rule. However, for the argument
presented here, the operator can be considered generally as denoting
the association of the correct system specification, whatever that is,
to the arguments given. As such, it must exist - here is a
sketch of the argument. Such a relation must exist by general
logical principles. But there may be two or more system specifications
corresponding to a given specification of a collection of parts.
(Note that the axioms of mereology entail that there is only one
system corresponding to a given collection of parts - namely the
mereological sum.) Now use the Axiom of Choice to obtain
an operator from this relation.
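In outline, and only as a gloss on this sketch: let R(p, s) hold just in case s is a correct system specification for the collection of parts specified by p. By the general logical principles just mentioned, R is total in its first argument, though not necessarily single-valued; the Axiom of Choice then yields an operator f with R(p, f(p)) for every p, and f may serve as the operator denoted by the single symbol.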
Back
Footnote 15:
Choosing option c does not mean that I assert that the
decomposition of a system into three components,
S = hardware_S ++ software_S ++ specification_S
is either complete, or even the most
desirable general decomposition.
I leave it open that there may be other components of a system which it is
necessary to consider when the system fails.
Back
Footnote 16:
Even though, in everyday software development,
the formal-spec is often not very formal.
Back
Footnote 17:
More basically, they fail to state
that the bicycle should be rideable.
Back
Footnote 18:
This is
often considered as an example of a non-logically true but still
a priori assertion in mathematics. Concerning descriptions
of behaviour, virtually any such description which rests upon
the (classical) mathematics of real numbers will fail if the existence
of an infinite set is denied. However, see the works of Mycielski
(op. cit.) for some ways around this.
Back
Footnote 19:
A safety criterion, after Lamport
(Lam77),
is a condition that ``nothing
bad will happen''. A liveness criterion, by contrast, is a criterion that
``something good will eventually happen''. There are formal characterisations
of both of these criteria, along with theorems concerning their
relation
(AS87).
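As a schematic illustration only (these formulas are mine, not taken from the main text), in the ASCII notation for temporal operators used in TLA:

    []~Bad    (`always, nothing bad happens')        -- a safety criterion
    <>Good    (`eventually, something good happens') -- a liveness criterion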
Back
Footnote 20:
This formulation is in the
Temporal Logic of Actions (TLA) of Lamport
(Lam94).
Back
Footnote 21:
However, Quine would repudiate
talk of the `meaning' of an individual concept
or word except as an allegory.
Back
Footnote 22:
Note that people do this in light conversation frequently, when they
set more store by `not being wrong' than by how things really are.
Our suggestion is not philosophically as trivial as this.
Back
Footnote 23:
e.g. ``But
we retain a wide latitude of choice as to what statements of the system to
preserve and what ones to revise; any one of many revisions will be sufficient
to unmake the particular implication which brought the system to grief. Our
statements about external reality face the tribunal of sense experience not
individually but as a corporate body.''
(Q52).
See also
(Q61).
Back
Footnote 24:
The reader may note that I am committed to calling Faulty
a system which experiences a sequence of
potentially unrelated transient failures, one after the other for ever.
Back
Footnote 25:
One other possibility is that no-one could
have thought of a simple counterexample. I find this implausible. I think
no-one bothered to look.
Back
ladkin@rvs.uni-bielefeld.de