Program verification is one technique used in analysis of critical software and other parts of computer systems. Smith wants to draw a drastic conclusion about verification.
He bases this conclusion on a conception of how computer systems function, based on how they are designed and built:
Smith thinks that basing a system design on a model precludes the possibility of rigorous mathematical verification, because mathematical verification says nothing about the relationship between the model and the `real world'.
So Smith believes: that computer systems must be designed on the basis of a model of the problem; that verification can only determine the logical relationship between a system and the model on which it is based; that we have no theory of the relationship between such a model and the `real world' (except perhaps for that part of the `real world' which is mathematical, an exception he may allow - see below), or, alternatively, that a model is an abstraction which must `do violence' to its subject matter; and that therefore verification cannot assure that systems function as intended.
Many computer scientists, as well as others, find these kinds of considerations and conclusions quite reasonable. I want to doubt both the validity of Smith's main conclusion and the accuracy of many of his arguments. Although many computer systems are designed based on models, not all of them are. Abstraction means talking about some aspects of what lies before you and omitting others; far from `doing violence' to the subject matter, it may in some circumstances be logically quite rigorous. And whether a system can rigorously be proven to fulfil its requirements depends on the logical form of those requirements. Some kinds of systems can be proven to fulfil their requirements. The interesting question is: which kinds?
Many computer scientists believe that when they are describing and specifying a computer system, they are `building a model'. This has always seemed to me an odd use of language. Those who write specifications in logic are writing sentences in a language that purport to describe a feature of the world, involving the computer system they intend to build. They are writing formal sentences (or, rather, predicates with one free variable). Why would this be called `model-building'? The answer lies in the meaning given to those sentences. Behind the formal sentences may lie a formal semantics, which is a specific mathematical construction. A formal specification in logic thus would assert something rigorously about a specific mathematical construction. This mathematical construction is a form of abstraction, and it's tempting to call it a `model'.
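For illustration only (the names input and output are invented for this example and belong to no particular specification language), such a sentence for a system S required to square its numeric input might read:

    \[ \mathit{Spec}(S) \;\equiv\; \forall n \in \mathbb{N}\,.\ \bigl(\mathit{input}(S) = n \rightarrow \mathit{output}(S) = n^2\bigr) \]

Here S is the one free variable. The sentence purports to describe the behavior of the intended system directly; the mathematical construction supplied by its formal semantics is what one is tempted to call the `model'.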
For example, Georg Henrik von Wright analyses the logic of causality in (6). He assumes a version of logical atomism in which there are a finite number of logically independent propositions, each of which may be either true or false independently of the others. A state of the world is completely described by a truth-value assignment to all of the propositions - one says for each one whether it is true or false. Thus with n propositions there are 2^n states. He then considers that as time progresses states follow each other in sequence. For any time, there is a next state. Thus for m time steps there are 2^(m·n) logically possible histories. However, not all of these 2^(m·n) logically-possible histories may be possible histories:
Von Wright simplifies as he abstracts, and these simplifications are based on assumptions which may be questioned. Smith might say he's `doing violence' to his subject matter. But sometimes mathematical abstraction may avoid questionable assumptions. Penelope Maddy would look at a bowl of five oranges and perceive directly a set corresponding to the natural number 5 (as described in (9)). For a realist philosopher of mathematics such as Maddy, mathematical abstraction at this level is a matter of paying attention to some features of the world and not others. (Of course, a much extended story has to be told for the development of such concepts as higher orders of infinity.) Others may consider the number 5 in a different sort of way, maybe as a denizen of some `abstract world', and imagine that somehow we `map' scenes with oranges in them into the set of natural numbers so that our bowl becomes associated with the number 5.
Compare this with building a model of an aircraft for testing in a wind tunnel. The model is a facsimile, something similar to but distinct from the real thing, which shares some properties, but not necessarily all, with the object or situation it is supposed to imitate. A tenth-scale facsimile of an aircraft is put into a wind tunnel to see how it behaves. The airflow pattern over the scale model is known to share its most significant properties with the airflow over the same shape at ten times the size. But the facsimile doesn't share all properties. Its ratio of size to weight could be different. But that is known not to affect the airflow properties of interest. The art of developing facsimiles for use in computer systems is subtle. Smith uses the example of a patient drug-regimen controller in a hospital. Such a system must be based on a model, a facsimile, of the patient's physical responses so that it may regulate release of the drugs into the bloodstream. The facsimile must be simple, to ease implementation and to make it easier to discover errors in the system, and yet must not be so simple that it omits significant features of the problem.
So we have a real airplane with a facsimile which shares some but not all of its properties with the `real thing'. The situation looks superficially similar with the bowl of oranges and the number 5. For example, the number 5 doesn't have five orange parts (if you don't like the von Neumann definition of numbers, it doesn't even have five parts). Since the number 5 has some but not all properties of the bowl of five oranges, and vice versa, they must be different sorts of things. Thus there is an analogy with the airplane and its model, which also share some but not all properties. One could be tempted to conclude that the relation of airplane to facsimile is as the bowl of oranges to the number 5, and thus use the same language when talking about both.
Note, however, a crucial difference. For a Maddian realist, the number 5 inheres in the bowl of oranges - it can even be regarded as a part of the bowl in a strict technical sense of the word part (10) - whereas the airplane facsimile could in no way be considered a part of the real airplane. It's a completely distinct entity with no part in common. It matters not so much here whether Maddian realism may be sustained. One may be far from the Maddian crowd and still grant that it makes sense to ask whether the number 5 is part of the bowl of oranges, whereas it makes no sense at all to ask that of the airplane and its facsimile. The relation of airplane to facsimile is not very much like the relation of number to bowl of oranges, whichever philosophy of mathematics one accepts.
Compare now with the situation von Wright constructs. It could be that his simplifications are true abstractions of the Maddian sort, that he's focusing on features of the world which are really there. His simplifications would be literally true, while not describing the whole caboodle. Or it could be that his simplifications are just approximate, that the state of the world that he cares about can't be described by holding some finite number of propositions to be true or false, but it sort of can, just as Euclidean geometry doesn't literally describe the geometry of space, but it sort of does (so it seems to us at the moment). A bit like facsimile airplanes, in fact. But notice that if we perform arithmetic to three significant digits and only consider surfaces of one square meter at one time, Euclidean geometry exactly describes the world. The relativistic corrections disappear - they simply have no bearing on the results when we stay in the realm of thousandth parts. Maybe von Wright could yet perform analogous operations under which his `questionable assumptions' simply become accurate descriptions of a certain situation, and thus his logic becomes a means of inference amongst literally true statements. But this speculation about the logic of abstraction strays from our main topic.
The main point is that extracting the mathematical features of something is not necessarily like making facsimile airplanes. Sometimes it may be, but sometimes it ain't. Nevertheless, we could describe all of this as `building a model' if we liked. In this sense Smith's explanation of system construction is uninterestingly true. The interesting question is what metaphysical status our model-building has. If we are making airplane facsimiles, or simulating the drug-absorption behavior of a human being for a medical system, then it may be appropriate to doubt with Smith that we have exact theories of the relation between our model and the world. But when we are primarily interested in mathematical properties, if we were Maddian realists those mathematical properties might very well inhere in the real-world subject. Then, as we perform mathematical calculations we are making logical inferences as to how the world really is, in respect of its numerical features at least. This form of abstraction would not be `doing violence' to its subject matter - it is perception and inference, and therefore exact knowledge about the world as it is. As Smith admits (below), we would have a `relatively unproblematic' understanding.
The contrast becomes starker when we consider a system made to perform pure mathematical calculations. Its `subject matter' is then mathematics itself - we no longer need to consider the relationship of the mathematics to the bowl of oranges. We may write formal sentences which can be proven to describe correctly the operations performed by a computer chip when doing certain mathematical calculations and which can be proven to perform those calculations as required by the mathematics, all subject to the supposition that the physical chip hardware behaves precisely as specified (11).
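The shape of such a theorem is roughly the following schema - my paraphrase, not the actual statement proven in (11):

    \[ \mathit{HardwareAsSpecified}(\mathit{chip}) \;\rightarrow\; \forall s\,.\ \bigl(\mathit{Pre}(s) \rightarrow \mathit{Post}(s, \mathit{run}(\mathit{chip}, s))\bigr) \]

where s ranges over machine states, run stands for the chip's specified state-transition behavior on the program of interest, and Pre and Post express the purely mathematical requirement. The antecedent makes explicit the supposition that the physical hardware behaves as specified.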
So much for literal readings. He claims nevertheless that we should all be able to understand his argument (and presumably agree with it):
He delineates four general issues. First, that only simple programs can yet be proven correct, and current technology may not `scale up' to systems with multiple millions of lines of subtle code. Second, that systems which require substantial human interaction are not favored targets for formal verification, since `not much can be `proven', let alone specified formally, about actual human behavior'. Third, `complex computer systems must work at many different levels. It follows they can fail at many different levels, too.' Fourth, he asks, what does `correctness' mean?
Smith's first point is a relative one. Program verification technology now works on fairly complex algorithms and `programs' (whatever you think `programs' may be). He's correct to note the problems of scale; however, the verification technology is constantly pushing the boundaries back and I am not aware of any good argument giving inherent reasons why `really big' systems should not yield to future theorem proving techniques. Arguments from complexity, for example, assume fundamentally similar technology working on statistically similar systems. It seems appropriate to remain agnostic on problems of scale.
His second point could be historically correct if you weren't to regard, say, aircraft as `systems which require substantial human interaction' (fly-by-wire control systems have been one of the major applications of verification since the 1970s). But that would be simply comic. I dispute his further claim that not much can be specified formally about human behavior. For example, portions of the Flight Crew Operating Manual for the Airbus A320 aircraft have been specified formally (12). It is, of course, harder to ensure that human behavior actually fulfils its specification (consider, for example, the Habsheim accident in (5)) - but this is old hat to aircraft accident investigators. The appropriate form of specification of a system involving human interaction is a conditional: if the human operator fulfils his/her specification, then the system and its environment satisfy such-and-such predicates. If the specification of the human role is rigorous, there is thus no reason to hold that the specification of a system involving human interaction need be any less rigorous than one involving none. It may be trickier to guarantee that the antecedent of the conditional is always true - but that's not a matter concerning logical rigor.
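Schematically, writing H for the formally specified statement that the human operator fulfils his/her role and P for the required behavior of the system and its environment (both names invented here for illustration):

    \[ \mathit{Spec} \;\equiv\; (H \rightarrow P) \]

The system may be rigorously proven to satisfy this conditional; whether H actually holds in operation is an empirical matter about human behavior - precisely the point just made about the antecedent.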
Elaborating his third point, Smith notes that
Smith's fourth point, the question of what `correct' means, is fundamental. There is an answer for two of the three parts of a system, namely middleware and hardware. Correctness with regard to middleware is a binary relation. A piece of middleware X is correct precisely in case X fulfils its specification Y. Y may belong to middleware or may be the requirements specification. In middleware, `fulfils' means `logically implies': X is expressed in a logical language and this statement may be proven logically to imply the statement Y, also expressed in a logical language. (Precise specifications in other formal languages can be translated into the language of formal logic, so this is not a strong restriction. In particular, I note that procedural programs, which belong to middleware, can be regarded equally as specifications of behavior, and that there are equivalent specifications written in a logical language as I require. Annotating procedural programs with pre- and postconditions is one way for serial programs; for concurrent programs there are more possibilities.) Middleware is the part in which program verifiers work and at which program verification is directed. Hardware, as both Smith and I agree, fails in just the same way that any other piece of engineering hardware may fail. Such failures are relatively well understood, and it's not generally thought that they submit to mathematical proof, although mathematical methods and reasoning are used to study and predict such failures. Requirements are trickier. They cannot be `verified' in anything like the same sense as middleware, and they don't `fail' in anything like the same way as raw hardware. When building requirements specifications, the intentions of the users have to be measured, and some match has to be attempted between user intention and formal statement of requirement. I discussed this in the section Faults in Systems of (13) and in (14), and will not repeat the arguments here. Smith rightly focuses on requirements as the potential weak point between designer/user intention and fulfilment of that intent.
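To illustrate the pre/postcondition style on a serial program, here is a sketch in Python, in which runtime assertions stand in for annotations that a verifier would discharge as proof obligations rather than check at runtime:

    def gcd(m: int, n: int) -> int:
        # Precondition: the specification is only required to hold
        # for inputs satisfying it.
        assert m > 0 and n > 0, "precondition: m and n must be positive"
        a, b = m, n
        while b != 0:
            # Loop invariant: the common divisors of (a, b) are
            # exactly the common divisors of (m, n).
            a, b = b, a % b
        # Postcondition: a is a common divisor of m and n, and every
        # common divisor of m and n divides a, i.e. a = gcd(m, n).
        assert m % a == 0 and n % a == 0
        assert all(a % d == 0
                   for d in range(1, min(m, n) + 1)
                   if m % d == 0 and n % d == 0)
        return a

The program text plays the role of the statement X; the pre- and postconditions together play the role of Y; verification is the proof that X logically implies Y.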
In a later section, Correctness and Relative Consistency, Smith considers general questions concerning the relation between requirements specification and middleware, which he calls specification and program. He suggests that a specification is essentially declarative, describing what proper behavior would be, while the program is procedural, describing how the behavior is to be achieved. I doubt whether this distinction can be sustained. One way of specifying which behaviors are allowed is to provide a step-by-step recipe for generating them, namely a procedure. This happens in number theory, for example, where it is known as definition by induction. Conversely, there are many declarative programming languages, in which built-in search mechanisms, about which the programmer needs to know little, search out a set of values fulfilling the program. So specifications may be procedural and programs declarative. Furthermore, most programs, even those in Fortran, can be considered a specification of output, so it's not always obvious what's specification and what's program. Smith uses his distinction to make the pertinent point that detecting faults in requirements, even those of mathematical programs, can be hard. This point may be made independently of any potential distinction between declarative and procedural specification. See (13) and (14).
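A toy sketch of the point, in Python: the same set of values may be given declaratively, by a predicate saying what is allowed, or procedurally, by a recipe generating exactly the allowed values, in the manner of a definition by induction:

    # Declarative: a predicate stating WHAT values are allowed.
    def is_even(n: int) -> bool:
        return n >= 0 and n % 2 == 0

    # Procedural: a recipe generating exactly those values, as an
    # inductive definition does (start at 0, repeatedly add 2).
    def evens():
        n = 0
        while True:
            yield n
            n += 2

The generator is plainly a procedure, yet serves perfectly well as a specification of the set; conversely, a logic-programming system runs a predicate like the first as a program, by searching for values that satisfy it.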
We may logically insulate a system specification from hardware failure by a simple logical device. Suppose that when the hardware is working correctly, the system S must fulfil condition P, and that the middleware has been formally proven to satisfy P. Then the `insulated' specification of S is Q: either the hardware has failed or P. This is more than a cheap trick. Suppose that S satisfies Q, but that P is not fulfilled. Then the hardware must have failed, and it follows that one calls a hardware engineer to change a board or the like. It would do no good at all to take the problem to the programming team and ask them to look at it.
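In symbols, with HardwareFailed as shorthand for whatever predicate characterizes hardware failure:

    \[ Q \;\equiv\; \mathit{HardwareFailed} \,\vee\, P \]

Q is logically weaker than P, so the proof of P for correctly working hardware carries over; and Q together with not-P entails HardwareFailed, which is why one calls the hardware engineer.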
So, middleware may be logically verified, and a system specification may accommodate hardware failure. Requirements are the logical weak point. There is no similar logical device which we may apply to insulate from requirements `failure' (see (14) for a discussion of what this means). However, it is by no means the case that all systems must lack correctness proofs. There are cases in which the requirements may be rigorously ascertained to be correct: for example, the trivial case in which the requirement is a simple logically true predicate, or the case in which the requirement is a purely mathematical function, such as in the small-whole-number gcd algorithm discussed in (14). There are, of course, still cases such as Smith mentions - the drug protocol dispenser, the missile early-warning system - in which the match between user intentions and system function may not be precisely ascertained. But this is not unique to computer systems - in fact none of the questions of requirements have much to do with whether the system is computational or not. It's just that requirements become much harder to ascertain accurately with computational systems, and user adjustments to system inadequacy are also much harder than they are with, say, bridges.
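Schematically: a logically true requirement such as

    \[ \mathit{Req}_1(S) \;\equiv\; (R \vee \neg R) \]

is satisfied by any system whatsoever, so the match between intention and formal statement cannot fail; and a purely mathematical requirement such as

    \[ \mathit{Req}_2(S) \;\equiv\; \forall m, n \in \{1, \ldots, N\}\,.\ \mathit{output}(S)(m, n) = \gcd(m, n) \]

(the names are again invented for illustration) can be checked against the mathematics alone.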
To summarise, middleware may in principle be proven correct (feasibility aside) for any computational system; whether requirements may be proven correct or not depends on what kind of requirements there are; and hardware may not be proven correct, since proofs aren't the kinds of things that apply to bits and pieces. Some complete systems (for example, those with logically true requirements) can be proven correct, hardware and all; others (with necessarily purely mathematical conditions as requirements) can be proven correct down to the hardware, and may thus be logically insulated from hardware failure by the device of disjoining the phrase `or the hardware is faulty' to the requirements.
First, I'll reserve the word `model' for the strong sense which Smith needs, the facsimile or von-Wrightian approximation. One way of describing the construction of a computer system is as follows. One of the first steps is to formulate the requirements. Requirements are a predicate, stating how the system is supposed to behave, in a language with a precise semantics - either a logical language or one which can be translated exactly into a logical language. Then one constructs the middleware, then assembles the hardware, and has a system. Either the system fulfils its requirements predicate or it does not. Where is the Smithian model? In order for Smith's claim to be valid, we need to find one.
A Smithian model is a facsimile or von-Wrightian approximation of whatever it is supposed to model. It would be ludicrous after Frege to claim that a predicate is a facsimile of anything. Maybe one could claim that the standard semantics for logical predicates is given by (logical) model theory, and the relevant Smithian model is some logical model-theoretic model which fulfils the predicate. This would be hard to sustain. Very few predicates have unique logical models (up to isomorphism), and those that describe computer systems hardly ever do, so the question would arise: which of these models was meant? One could say that the specification was then ambiguous, and try to refine it until it was categorical (the technical term for having only one model-theoretic model up to isomorphism), but this is not done, for good reason. When one specifies the communication in a banking transaction system, one cannot possibly predict which customers will come in which order and make which transactions for which sums, so one wants it to work for many possible sequences of such actions. Thus categoricity of specifications is highly undesirable. This notion of model cannot be what is meant.
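To see how far ordinary specifications are from categoricity, consider a banking-flavored fragment (all predicate names invented for illustration):

    \[ \forall c, a, t\,.\ \bigl(\mathit{deposit}(c, a, t) \rightarrow \mathit{balance}(c, t+1) = \mathit{balance}(c, t) + a\bigr) \]

Every history in which deposits are duly credited satisfies this, whichever customers arrive in whatever order with whatever sums - a multitude of non-isomorphic models, exactly as the specifier wants.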
But maybe we can't avoid using logical model theory. So Smith:
Smith's example of the drug regimen, and my example of the airplane scale-model, show that Smithian modelling can sometimes be helpful, even necessary, in building a particular system. One needs to have some idea, not necessarily a logically exact idea, of how a patient reacts to drugs, or of how the air flows over a shape, in order to figure out how to build such a system. But the claim we are considering is not that we sometimes use Smithian modelling but that we always do. Smith bases his argument that we cannot verify computer systems on his claim about models, and he could not draw the universal conclusion he does about verification unless all systems are based on Smithian modelling.
Berkeley's solution was straightforward. He believed perception was direct, i.e., we have direct access to the objects of perception, and since Locke had argued that these objects were ideas in our minds, he took it that that's what objects are: mental constructions. But how can we then talk about reality, if everything's just a mental construction (rudely put, a figment of our imagination)? In particular, how can an object persist if no one is perceiving it (the `tree in the quad')? Berkeley, the consummate salesman, didn't lose an opportunity to promote the company product: everything is perceived all the time in the mind of God. Reality is the sum total of what's in God's immediate perception, and each of ours is a subset of that. That explains not only persistent objects, but also how perception can be mistaken, and how there is a reality to be mistaken about.
The mechanics of Berkeley's solution doesn't appeal to modern philosophical aesthetes, who prefer to explain affairs without invoking God if possible. But his main point, that perception is direct, and his reasons for it, namely that intermediating perception raises more puzzles than it answers, are the foundation of most modern views such as that of Austin. (It is widely recognised now that Locke's arguments are more involved than my short sketch of them shows, and some have argued that Berkeley erected a straw man. Nevertheless, the Lockian argument that I have presented is generally regarded as having been canonically refuted by Berkeley.)
How does this apply to models? Apply Berkeley's criticism of Locke now, not just to `things', but to the entire relational structure of the world - relations between things, functions of things, relations of relations, and so forth. (If you feel nervous about this, pretend you're a logical atomist.) Smith's claim is that to build any computer program, we must first devise a facsimile of some feature of the world we wish the program to reckon with. Why? What would compel us to do this? If we had direct epistemological access to the world-feature, we could simply specify the relevant characteristics of this feature directly in the specification. So if we cannot do this, if we must use a facsimile, it must be because we have no such access to the world-as-is.
But if we have no access to the world-as-is, how then can we determine the relation of facsimile to world-feature (as I shall call it) that we are facsimulating? Since we have no access to the world-feature directly, we cannot say that this aspect of the world-feature corresponds to this aspect of the facsimile, as we can of the airplane and its model. If we could, the facsimile would be a mere heuristic, and we would have no need in principle (just occasionally in practice) of the facsimile to describe the world-feature. But Smith cannot allow us to avoid passing via the facsimile. His argument against verification depends on the properties of facsimilitude.
The role of Smithian models corresponds to the role of Lockian `ideas'. We can build programs which calculate according to the facsimile (we can perceive `ideas' directly) but we cannot tell anything about the world-feature (we have no access to the objects that cause the `idea'). Berkeley's criticism applies. We cannot determine what correspondence holds between the world-feature and the facsimile, because to do that we would have to have access to the world-feature. Since we don't, we have no means of assessing whether the facsimile resembles the world-feature in any useful way at all. But without being able to determine some correspondence, the facsimile venture is hopeless. The correspondence is what makes the facsimile a facsimile: the airplane model is a facsimile of an airplane and not a facsimile of a turnip. Thus for the facsimile approach to work, we must be able somehow to judge the correspondence; this entails that we have some epistemological access directly to the world-feature itself; and this entails that we cannot rule out that we could describe the world-feature directly in the specification of the program and avoid the facsimile. Not in every case, maybe, but in some.
Later on, in a section on Computers and Models, Smith agrees with the main point that I have argued his Smithian-model argument commits him to:
If we eventually do have a general theory of the model-world relationship, then we will even be able to eliminate any need for models altogether - simply by composing the relation between specification and model with the relation between model and real world. How likely that is, I wouldn't like to speculate. In the case of mathematics, it seems that Smith agrees with me that Smithian models are not really needed.
Granted that some systems don't need facsimiles, the interesting question becomes: which sorts of computer system do not need facsimiles? That question I have begun to answer in (14). A partial answer is: some programs which perform calculations in the realm of mathematics, as well as some other boring systems whose specifications contain only safety properties. Given the logical devices above by which we may obtain rigorous specifications also for systems including unreliable hardware and for systems including human interaction, Smith's thumb may well have popped out of the dyke and Smithian models washed ever further away.
If one is thinking of Smithian modelling as the relation of the tenth-scale airplane to the full airplane, or of the ideal patient model to the real live person, then we have noted that the Smithian model actually has the properties that are being ignored, but the values of these properties may differ in the facsimile from those in the world-feature being modelled. For instance, the lengths of parts in the model airplane are one tenth of those in the real thing, and the material of which they are made is different. But abstraction means that one simply says nothing about such divergent values. It does not follow that one loses rigor by thus remaining silent. If I say that my jacket is red, the truth or falsity of this is unaffected by whether I choose also to say what size my jacket is, how it's buttoned, or where the pockets are.
The discussion above following Maddy (9) is closely relevant. A Maddian argument holds that the von Neumann set 5 (= the set of the numbers 0-4) is actually there and observable in a basket of 5 eggs. This is not `doing violence to [the] subject matter'. According to a Maddian it is, on the contrary, part of the subject matter itself. And the logical relations of this set 5 to those of other sets are exactly as set theory says they are. If Maddy is right, I could specify `some object x which holds five objects' - it is trivial to write this predicate in first-order logic with a bit of mereology, as shown below - and reason with this specification in set theory. And I would be reasoning about actual observable properties of the basket with five eggs. How is what I do `doing violence to its subject matter'? Smith provides no argument for his claim that `abstraction does violence' other than simply asserting it. It seems the claim needs at least some.
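For instance, writing in(y, x) for the mereological containment relation, the `exactly five' reading of the predicate is:

    \[ \mathit{Holds5}(x) \;\equiv\; \exists y_1 \cdots \exists y_5 \Bigl( \bigwedge_{1 \le i < j \le 5} y_i \neq y_j \;\wedge\; \bigwedge_{i=1}^{5} \mathit{in}(y_i, x) \;\wedge\; \forall z \bigl( \mathit{in}(z, x) \rightarrow \bigvee_{i=1}^{5} z = y_i \bigr) \Bigr) \]

This is first-order logic plus one mereological primitive; nothing in it is a facsimile of the basket.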
But there are likewise some programs, indeed systems, which do not use Smithian models essentially, and which one may prove correct. These programs do not require facsimiles to be constructed to aid in their specification: for example, some mathematical calculations designed for a particular processor. A specification of such a system refers directly to, and can be checked against, knowable mathematical features of the world (pick your favorite alternative phrasing if you're not sympathetic to Maddian realism). To argue against this conclusion would require, for example, a refutation of logical atomism - which most practising computer scientists would not grant as easily as Smith supposes - and also a refutation of Maddian realism.
We have seen that the argument for pervasive Smithian modelling yields to Berkeleian arguments, and that Smith's claim that abstraction `does violence to' its subject matter is prima facie implausible and lacks an argument; and these are the two arguments on which Smith bases his claim for the hopelessness of verification. Smith's pertinent comments on how hard it is to get requirements right can be derived from considerations about requirements specification, middleware and hardware, without yielding any conclusion about models. Thus Smith's arguments fall well short of his target, and we may remain sceptical of his broad conclusion that verification can never succeed in proving that a system fulfils its intentions. It can, sometimes.
(1): Brian Cantwell Smith, Limits of Correctness in Computers, Report CSLI-85-36, Center for the Study of Language and Information, Stanford University, California, October 1985. Also in (2), 275-293.
(2): Timothy R. Colburn, James H. Fetzer and Terry L. Rankin, eds., Program Verification, Kluwer Academic Publishers, 1993.
(3): Peter G. Neumann, ed., Risks to the Public in the Use of Computer and Related Systems.
(4): John Rushby, Anomalies in Digital Flight Control Systems (DFCS).
(5): Peter Ladkin and others, Incidents and Accidents with Fly-By-Wire Commercial Airplanes.
(6): G. H. von Wright, On the Logic and Epistemology of the Causal Relation, in (7); also reprinted in (8).
(7): P. Suppes et al., Logic, Methodology and Philosophy of Science IV, North-Holland, 1973.
(8): Ernest Sosa and Michael Tooley, eds., Causation, Oxford Readings in Philosophy Series, Oxford University Press, 1993.
(9): Penelope Maddy, Realism in Mathematics, Oxford University Press, 1990.
(10): Peter Simons, Parts: A Study in Ontology, Oxford University Press, 1987.
(11): Robert S. Boyer and Yuan Yu, Automated Correctness Proofs of Machine Code Programs for a Commercial Microprocessor, in Deepak Kapur, ed., Automated Deduction - CADE-11, Lecture Notes in Computer Science 607, pp. 416-430, Springer-Verlag, 1992.
(12): Peter B. Ladkin, Analysis of a Technical Description of the Airbus A320 Braking System, High Integrity Systems 1(4), 331-350, 1995.
(13): Peter Ladkin, The X-31 and A320 Warsaw Crashes: Whodunnit?
(14): Peter Ladkin, Correctness in System Engineering.