
Logical Form as a Binary Relation

Peter B. Ladkin

Research Report RVS-Occ-97-03

Abstract: I consider the notion of logical form, and argue that there are considerable difficulties with defining a canonical logical form of sentences and propositions: for given A, finding a unique B such that A has the logical form of B. The purpose of logical form is unproblematically served by allowing for a given A varied B such that A has the logical form of B. I give a criterion for the relation of having the logical form of in terms of the notions of syntactic transformation and inferential equivalence.


Modelling, the Meaning of Propositions, and Logical Form


Modelling has played its role not only in engineering but also in the philosophy of logic and language. Wittgenstein's influential picture theory of meaning (Wit22) ascribed the meaning of elementary, `atomic' propositions to the `picture' they drew of a state of affairs, in much the same way as an arrangement of blocks on a table may represent an arrangement of cars on the road during a traffic accident (see the discussion in (Ric96)). The blocks represent the cars in the sense that the spatial relations amongst the blocks correspond directly and precisely to the spatial relations of the cars. Distances and measurements are transformed, but in a uniform, structure-preserving way.

For this `modelling' to work, some structure must be shared between representation and actual situation. For the blocks-representation, the shared structure would be the spatial relations and, maybe, uniformly-transformed distance measurements. For propositions, the `picture theory' holds that this structure obtains in common with the structure of `states of affairs' (a more inclusive philosophical term for `real world', which allows not only objects in the `real world', but relations between objects and so forth). This common shared structure was identified by Wittgenstein as the logical form. That is, what atomic propositions share with the states of affairs they picture is their logical form. Wittgenstein famously held that this could not be stated in a language, only `shown'. I shall substantiate this view below.

A lot has happened in philosophical logic and the theory of truth since the Tractatus. Why consider the Tractatus version? I have two reasons. First, in my experience, engineers seem to hold a version of the picture theory as being very near the truth. One talks about states of affairs, asserts they are one way or the other, and the `facts' `show' which sentences are true and which not. Furthermore, some descriptions just picture the way that things are: for example, the ubiquitous and sometimes iniquitous `boxes-and-arrows' diagrams in software engineering. One can then combine these simple descriptions with logical connectives, or their pictorial equivalents, if one wants. So it seems as if some `picture theory' is needed: at least a meaning-theory of pictures, if not a picture theory of meaning. A meaning-theory of pictures follows from the notions of abstraction and modelling in (Lad97.4), which are in turn based on the notion of syntactic transformation used in the analysis of logical form presented here.

Making a picture and building a model are also very similar activities, both concerned with representing features of things in an artifactual way. However, most engineers do not think through their worldview to the depth that Wittgenstein and his logical successors have done. It is therefore worth considering derivations of the Tractatus view as one possible thorough exposition of a similar worldview to that of many engineers (Footnote 1). I consider the notions of abstraction and modelling in (Lad97.4) and base those notions, as here logical form, on binary relations which relate two objects by an operation of syntactic transformation (to be defined below).

Some Preliminaries - Sentences, Assertions, Propositions, Statements


My goal is to show that the binary relation has the logical form of is well founded, but that it is not necessarily the case that to every sentence/proposition/statement (pick your poison) A there corresponds a canonical B such that A has the logical form of B.

There are some necessary logical distinctions to be made before we can discuss the main thesis, with apologies to readers to whom this theme is well-known. It is covered for example in (Wol89, Chapter 2), with which my use of terms is consistent, and which in turn follows Strawson. Those to whom these distinctions, and their difficulties, are known may skip this section. Readers not familiar with these concerns might find a survey such as (Wol89) useful.

I shall take a (written) sentence to be a sequence of symbols that can be used to make a meaningful assertion in some natural, or formal, language. `All bats are bell-bottomed' is a sentence of English. I will write it again: `all bats are bell-bottomed'. These two spatially-distinct symbol sequences form two different occurrences, almost adjacent to each other, of the same sentence. Let's rephrase this in technical parlance. These two spatially-distinct symbol sequences form two different tokens of the same sentence type. Two tokens which are sufficiently orthographically similar are also taken to form the same type sentence (Footnote 2).

I take it that the idea of a sentence type is not so problematic. I will generally speak of a sentence without bothering to distinguish between type and token, and assume a coherent interpretation in context. Sentences are used to make statements. We can consider (i) `John kicked the ball' and (ii) `the ball was kicked by John' to make the same statement, even though they are different sentences (providing that `John' and `the ball' successfully refer to the same objects in both sentences). Also, if John happens to be the little guy in the blue shorts, then (iii) `the little guy in the blue shorts kicked the ball' makes the same statement, namely that a particular person kicked the ball, and this person is identified in two different ways in the two different sentences (i) and (iii).

Suppose that we attempt to determine the truth value of (i), (ii) and (iii) by considering what the words and phrases mean. We know what it means to be a little guy in blue shorts (that's a description which itself has structure), and we know John. John happens, as a matter of fact, to be the little guy in blue shorts. But if John weren't to be the little guy in the blue shorts, and the little guy kicked the ball but John didn't, then (iii) would be true and (i) false. Sentence (ii), saying `the same' as (i), would also be false. So the fact that (i), (ii) and (iii) make the same statement seems to be contingent on what's going on in the world.

So part of the explanation of how the sentences get their truth values relies on their structure. Something is predicated of John, aka the little guy in blue shorts, and this predication is successful: he's actually kicking the ball. And that the phrase `little guy in blue shorts' refers to John in this circumstance has also to do with predication: John is little and he's wearing blue shorts and he's the only one in the neighborhood satisfying this description. To figure out it's John that's referred to, we have to analyse and use the structure of the phrase. But since it need not have been John that was referred to, analysing the phrase alone is not sufficient to determine reference. Contingent facts must enter into it as well.

So it seems we need to distinguish (i) and (ii) from (iii), but not (i) from (ii). I shall say that a sentence can make a proposition, and two sentences make the same proposition if under all circumstances they would take the same truth value. Thus (i) and (ii) make the same proposition, and both make a different proposition from (iii), because there are circumstances under which they would take a truth value different from that of (iii).

I shall say that, under the present circumstances, (i), (ii) and (iii) make the same statement, since John is indeed the little guy in blue shorts, and of that person it is being said that he kicked the ball. Thus two sentences make the same proposition if they make the same statement under all circumstances, and make different propositions if there are some circumstances under which they would make different statements.

It's tempting to reify, and say that there are things called propositions and statements, with which sentences are in relation (the latter are used to make the former). However, I don't have to, so I won't. I shall consider that making the same/a different proposition and making the same/a different statement are binary relations on sentence types.

A sentence that may be used assertively is often called a declarative sentence. Asserting is what I do when I make a statement intending to convey its truth. An assertion is the act of asserting. Assertions may be contrasted with quotations, queries, commands, ejaculations, expressions of relief, promises, excuses, and other sorts of speech acts which involve writing sentences.

There are difficulties in dealing with the notions of proposition and statement, as we shall consider in Logical Form and Puzzles of Reference. For example, a sentence may contain a non-referring singular term (a singular term is a phrase that purports to pick out an individual, and it may fail - `the king of France'); or a property may be predicated of a sort of object to which it cannot apply (`John is divisible by 3 and 5'; `the aircraft cried in shame'), a so-called `category mistake'.

How to Show Logical Form


The `picture theory of meaning' considers logical form as that form which a true atomic proposition-as-picture shares with the state of affairs with which it corresponds. But that's not the only idea of logical form that there is. Let's start with a definition.

logical form
The logical form of a sentence is the structure, shareable with other sentences, responsible for its power in inferences. That is, its logical form determines the way in which it can be validly deduced from other sentences, and the way other sentences can be validly deduced from sets of premises that include it. Obviously there is something common to the argument `All men are mortal, Socrates is a man, so Socrates is mortal', and `All horses bite, Eclipse is a horse, so Eclipse bites'. This common form may be revealed by abstracting away from the different subject matter, and seeing each argument as of the form `All Fs are G; a is F, so a is G'. The symbols of symbolic logic simply represent such common forms and the methods of combining elements to make up sentences. It is frequently controversial to what extent reduction to simple forms is possible, and how much hidden structure it is fruitful to look for, in order to reveal similar logical forms under the surface diversities of ordinary language.

[Note that this definition incorporates the notion of `abstracting away': determining the logical form, under this explanation, requires abstraction. But what is in fact happening in this example is explained below under the notion of syntactic transformation, as is the notion of abstraction in (Lad97.4).]
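The `abstracting away' mentioned in the definition can be sketched, crudely, as word-for-word substitution of content words for place-holders. The Python fragment below is my own illustrative simplification (the schema encoding and the word-level substitution are assumptions of the sketch, and English number agreement between `man' and `men' is ignored):

```python
def instantiate(schema, binding):
    """Fill the place-holder symbols of a schema with content words,
    leaving the logical vocabulary (`all', `is', `so') untouched."""
    return [" ".join(binding.get(word, word) for word in line.split())
            for line in schema]

# The common form `All Fs are G; a is F, so a is G' as a schema:
schema = ["all F are G", "a is F", "so a is G"]

socrates = instantiate(schema, {"F": "men", "G": "mortal", "a": "Socrates"})
eclipse  = instantiate(schema, {"F": "horses", "G": "biters", "a": "Eclipse"})
print(socrates)
print(eclipse)
```

Both arguments are instances of the single schema; what they share is exhibited by the schema itself rather than stated in either sentence.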

Prima facie, there are two notions here. First, Wittgenstein's thesis that logical form is shared by states of affairs and certain true propositions; second, logical form as part of a clarification about what correct inference involving the sentence consists in. A logically-valid inference consists in a proposition or sentence, called the conclusion of the inference, and a set of propositions or sentences, called the hypotheses of the inference, such that if all hypotheses were to be true, the conclusion must also be true (the word premiss can also be used instead of hypothesis, and valid is often used instead of logically valid).

The connection between the two notions of logical form could very roughly be given by the following sort of argument. For Wittgenstein, the tautologies (for him, the logical truths) have no content, are properly meaningless - they say nothing (by the way, I don't agree, but nothing of consequence for this essay rests on it). Let us suppose that the hypotheses of an inference are all true. Then they would share logical form with states of affairs. The conclusion of the inference would be just another aspect of the cumulative state of affairs determined by the hypotheses, containing no constituents that weren't already in the various states of affairs corresponding to the hypotheses. The tautologies, being the conclusions of arguments with no hypotheses, therefore correspond to every possible state of affairs. Under the assumption that to make a meaningful assertion is to discriminate some states of affairs from others, tautologies would thereby not be meaningful.

One could consider this argument in detail, maybe to try to explain what it would be to be a part of a state of affairs without reverting to an explanation in terms of valid inferences, or maybe to defend the view that meaningful assertions must discriminate some states of affairs from others, but it is peripheral to my present concerns, so I shall leave this sketch as it is.

I wish to concentrate on what could be meant by `only shown, not stated'. The assertions `all men are mortal' and `all horses are four-legged' have a similar logical form, which we may express by `all Fs are G' (Footnote 3). Let's call this a pseudo-sentence. This pseudo-sentence contains two words, F and G, which are not meaningful. They are place-holders for meaningful words, put there to indicate what is variable between the two sentences about men and horses, and thereby to show what is the same, namely the rest of the structure. We may take it that this exhibited structure corresponds to the similar role that the two sentences play in valid inference. So it seems we may use non-words along with words in a pseudo-sentence to show clearly the logical form of real sentences.

This logical form can be shown as we did, but it is not at all clear it could be stated in the natural language: the symbols F and G are `place-holders', not meaningful components of the pseudo-sentence, and by virtue of their function as place-holders, they contribute nothing to explaining the meaning of the pseudo-sentence. However, if we simply put nothing there instead of F and G, we obtain the non-sentence `all are', and we can no longer tell where the omitted elements of the original sentence were placed: we could as well be indicating `Fs all are' or `All are Gs'. Furthermore, we would not be able to indicate the logically-relevant difference between `All Fs are G' and `All Gs are G'. Any of these four pseudo-sentences could be meant were we simply to omit the meaningful sentence elements that do not contribute to the logical form; and these four pseudo-sentences convert back into sentences that all play different roles in inference. Omission of non-contributing sentence elements thus does not lead us to the logical form, it leads to logical ambiguity. We therefore require something like place-holders and pseudo-sentences to indicate logical form.

Within reason, we can invent languages (Footnote 4). I can define a semi-formal language, which shall include words like `all', `are', `there exists', `and', `or', `not', as well as non-natural symbols such as predicate-symbols, name-symbols, and so on, which I shall use as `place-holders'. I define grammar rules to say what shall be acceptable expressions of the language. My pseudo-sentences have become real sentences in this new language.

I can now use a representative (or many representatives) of this language as a canonical way of expressing the form of sentences whose logical form it shares. Thus I could stipulate that `all Fs are G' is the canonical logical form of `all men are mortal', because the meanings of men and mortal do not enter into the role of the sentence in inference in the way that the quantifier `all' and the copula `are' do. The logical form of `all Fs are G' in my un-natural language is shared with that of the natural-language sentences, and furthermore is clearly indicated in the semi-formal language, through `placeholder' symbols.

However, there are other pseudo-sentences which share the same form, for example `all Hs are J'. Could we have stipulated that `all Hs are J' be the canonical logical form? Clearly so: the placeholders have no semantic import and it is only necessary that they be of the same syntactic type, and in this case also distinct from one another. So `all Fs are G' and `all Hs are J' share logical form, and they share it with `all men are mortal'.

This treats logical form as a relation between two sentences, that `share' that form; a functional sense of the logical form comes with the stipulation that some corresponding pseudo-sentence is `canonical'. `All Fs are G', `all Hs are J' and `all men are mortal' share logical form, and we may take one of the former as a canonical logical form of the latter. The reason why we might do this is that the pseudo-sentences show the form in a way that other operations on the natural-language sentence do not. Such a stipulated `canonical form' is not obviously uniquely determined: first we choose a pseudo-language and then from amongst sentences sharing the same structure we choose a representative.

So we have seen two features of attempting to define logical form. First, one cannot simply give the form by omitting sentential components that do not contribute to the form: we must use pseudo-sentences that mark contributory/non-contributory features through a marked difference in symbolism. Second, there may be no one unique pseudo-sentence that corresponds logical-form-wise with a natural-language sentence: one stipulates a canonical representative, if at all. These procedures are based upon the notion of two sentences, or a sentence and a pseudo-sentence, sharing logical form. When a form is shared, it may be shown by exhibiting a canonical example. But we have not found an object that has only the shared form and no other sentence elements: the pseudo-language makes essential use of place holders. This is one interpretation of the view of Wittgenstein, that logical form may be shown, not stated. This might seem a trivial sense in which to interpret Wittgenstein's dictum, were it not for the fact that there are difficulties with the notion of a canonical logical form of a sentence. These difficulties will substantiate the notion of logical form as a binary relation which is not functional (Footnote 5). We shall consider them below.

Logical Form and Puzzles of Reference


I talk of a sentence `making a statement' to mean that it can be used to make an assertion in the current circumstances, and that this assertion is meaningful and either true or false. A sentence `makes a proposition' if there are possible circumstances in which it could be used to make a statement. These concepts are not necessarily clear, but will suffice to indicate the difficulties. The idea is that making a proposition is more closely connected with the meaning of a sentence, and that this meaning is partly explained by enumerating the circumstances under which the sentence makes true statements (Footnote 6).

Consider what proposition may be made by `the king of France is bald'. The sentence predicates baldness of an entity described as the king of France. The phrase `the king of France' is meaningful - we know what it is to be a king, and what France is, and indeed there used to be a king of France, so under different circumstances the phrase could be used to refer to a person. And in those circumstances, that person is asserted to be bald, which is an appropriate property to assert of persons (unlike, say, whether they are divisible by 3). Thus the sentence makes a proposition. The problem is that there is no such entity now. The sentence would be false if the predicate doesn't hold of the entity, and true if it does. But since there is no such entity, what can we say about truth and falsity?

Russell, in his Theory of Descriptions (Aud95, entry on definite description), held that the logical form should account for this phenomenon. He also did not distinguish between proposition and statement. He suggested that meaningful occurrence of the phrase `king of France' means that the sentence entails the conditions under which this phrase could successfully refer; therefore entails `there is one and only one object that satisfies the predicate `king of France'' and furthermore asserts of this object that it is bald. Thus the sentence makes the statement that `there is one and only one object that satisfies the predicate `king of France' and this object is bald'; and this statement is simply false, because the first conjunct is false.

On the other hand, philosophers such as Strawson have argued that although the sentence makes a proposition, it does not make a statement, because a condition of making a statement is that the singular term `the king of France' successfully refers. If this is not the case, therefore, no statement is made, although a proposition is made. Failure to state rather than failure to propose is what is happening here, on this view. If no statement is made, one cannot say that the proposition is determinately true or determinately false, so this is known as a Truth-Value Gap Theory.

Containing a non-referring singular term is not the only way that a statement may fail. Suppose I were to assert that the sentence `the cat sat on the mat' sings and occasionally dances. Or that I am divisible by both 3 and 4. Sentences just aren't the sort of things that can sing and dance; people are. And people aren't the kinds of things that can be divisible by 3 and 4; numbers are. In both cases, the predicate is inappropriate. These sentences both make what Ryle called a `category mistake'. It may be questioned whether there is any proposition corresponding to a sentence containing a category mistake, since it's not clear that there are any states of affairs under which the sentence could be true.

So although sentences (along with sentence-types and sentence-tokens) and assertions are relatively unproblematic concepts, statements and propositions have their philosophical difficulties. However, all four terms are in common use and correspond to necessary distinctions, and I wish to use them in this essay, if possible without thereby incurring the metaphysical baggage that comes with them, or the linguistic convolutions that one must employ in order to stay type-correct, or dealing with the philosophical problems that using the terms might entail.

Russell's Theory of (Definite) Descriptions (ToD) thus transformed a sentence carrying a singular-term noun phrase, for example `the king of France is bald' into an assertion

(exists x)(King-of-France(x) /\ (forall y)(King-of-France(y) => y = x) /\ Is-bald(x))

which does not have the surface syntactical form of the original (Footnote 7). It is a context-sensitive transformation of the original syntax. Russell claimed that this transformation exhibited (part of) the logical form of a sentence with a singular term that was a description rather than a simple name. One could also hold this as an explanation for all singular-term referring, names included. On such a view, sentences are false if a name fails accurately to refer. On the Truth-Value Gap Theory (TVGT), such a sentence has the form

Is-bald(the-King-of-France)

which is in subject-predicate form like its surface syntax. Such differences in form could have consequences for my explication of logical form, since I wish to base my argument on syntactic transformations of a simple sort, and prima facie it would be more convenient for me to accept the surface-syntax-conformant Truth-Value Gap theory.
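The difference in truth conditions can be made concrete with a toy evaluation over a finite domain. This is a sketch of my own, not Russell's formalism: the function name and the domain are invented for illustration.

```python
def russellian_the(domain, satisfies_description, has_property):
    """Russellian truth conditions for `the D is P': there exists exactly
    one object satisfying the description D, and that object has P."""
    witnesses = [x for x in domain if satisfies_description(x)]
    return len(witnesses) == 1 and has_property(witnesses[0])

# A toy domain containing no king of France.  On the Theory of
# Descriptions, `the king of France is bald' comes out plain false,
# rather than lacking a truth value:
domain = ["Jacques", "Marie", "Pierre"]
print(russellian_the(domain, lambda x: False, lambda x: True))  # False
```

On the Truth-Value Gap theory, by contrast, the failed reference leaves the sentence without any truth value, so no such total evaluation function could be given for it.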

However, consider the following sentence

Churchill used a mug to drink tea from on 31 March 1997

This appears to be subject to the same considerations as the `King of France' example. But there's an extra referring term, `a mug'. The sentence is normally taken to be equivalent to its passive

A mug was used by Churchill to drink tea from on 31 March 1997

which is normally taken to have existential form

(exists x)(x is a mug /\ x was used by Churchill to drink tea from on 31 March 1997)

Of which mug is this predicate true? No mug. So surely the sentence is false, under the usual semantics of indefinite expressions (existential quantifiers)? This conclusion coincides with Russell's Theory, but not with the Truth-Value Gap theory. The argument holds whether the predicate is held to be meaningless or false. It matters only that it is not true of any mug. This poses a problem for the Truth-Value Gap theory, namely that one would expect that a sentence and its passive form normally express the same proposition, but yet this sentence fails of truth-value while its passive form seems to be false - suggesting that one sentence makes a proposition while the other does not. Extraordinary.

So accepting the Theory of Descriptions seems to entail context-sensitive, fairly complex syntactic transformations to obtain logical form (think of the transformation of sentences containing three or more singular terms), whereas the Truth-Value Gap theory seems to entail that in some circumstances a sentence may make no proposition while its passive transformation does so. Luckily, I can avoid deciding between the two theories, by considering that logical form (in one of its meanings) concerns the role that sentences play in inference, defining the notion of inferentially-equivalent sentences, and arguing that inferentially-equivalent sentences must share all logical form(s) in common. I shall need to consider some more examples.

Let us consider first two valid arguments in a first-order-logic-like pseudo-language, similar in form, in one of which occurs a singular term that does not refer, and the other of which contains in a similar position a singular term which does refer. `Churchill' is a singular term. Consider:

(forall X)(A mug was used by X to drink tea from on 31 March 1997 => X is not dead) (Hyp 1)
A mug was used by Churchill to drink tea from on 31 March 1997 (Hyp 2)
Churchill is dead (Hyp 3)
A mug was used by X to drink tea from on 31 March 1997 => X is not dead
                            (Concl 1, from Hyp 1)
A mug was used by Churchill to drink tea from on 31 March 1997 => Churchill is not dead
                            (Concl 2, from Concl 1)
Churchill is dead /\ Churchill is not dead (Concl 3, from Hyp 3 and Concl 2)

Since Conclusion 3 is a contradiction, and contradictions cannot be true, this argument is unsound (Footnote 8). Since it is valid, one of its hypotheses must also not be true. Hypotheses 1 and 3 appear to be true, so Hypothesis 2 is the most likely candidate for untruth. Consider now the equivalent argument with `PBL' substituted for `Churchill'. Hypothesis 2 of this new argument is true, and Hypothesis 1 appears to be. That leaves Hypothesis 3 as the one that is not true. This example shows me that the role of sentences in inference, concerning the validity and soundness of arguments, may at least sometimes be ascertained without attending to the question of successful reference of singular terms.
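The propositional skeleton of this argument can be checked mechanically. The sketch below is my own, and deliberately ignores the quantifier in Hypothesis 1, treating the two relevant sentences as unanalysed atoms M and D:

```python
from itertools import product

def satisfiable(formula, atoms):
    """Brute-force check whether some truth-value assignment
    satisfies the formula."""
    return any(formula(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

# M: `a mug was used by Churchill to drink tea from on 31 March 1997'
# D: `Churchill is dead'
# The hypotheses, at the propositional level: M => not D, M, and D.
hypotheses = lambda v: ((not v["M"]) or (not v["D"])) and v["M"] and v["D"]

print(satisfiable(hypotheses, ["M", "D"]))  # False
```

Since no assignment makes all three hypotheses true together, the contradiction of Conclusion 3 is derivable from them, and the argument, though valid, cannot be sound.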

With this in mind, let us consider the `King of France' example again, with respect to its formal role in inference.

The King of France is bald
There is a unique object which is the King of France

I claim this inference is valid under either Theory, ToD or TVGT. An inference is logically valid just in case, if all the hypotheses were to be true, the conclusion would also be true (Footnote 9). If the hypothesis `the King of France is bald' were to be true, then on either theory the singular term would in these circumstances successfully refer. Hence it would be the case that there be a unique object fulfilling the description contained in the singular term, and the conclusion follows. Similar considerations show that

The King of France is bald
There is a unique object which is the King of France and this object is bald

is logically valid, as is also the argument

There is a unique object which is the King of France and this object is bald
The King of France is bald

I call two sentences A and B inferentially equivalent if both

A             B
---   and   ---
B             A

are logically valid. I call two sentences A and B minimally inferentially equivalent (mie for short) if they are inferentially equivalent and no proper subexpression of A is inferentially equivalent to B and vice versa. The point of the minimality criterion is to rule out extraneous stuff in one of the sentences: where P is a sentence and A and B are inferentially equivalent, A and B /\ ~(P /\ ~P) are inferentially equivalent but not minimally inferentially equivalent. Thus `There is a unique object which is the King of France and this object is bald' and `The King of France is bald' are inferentially equivalent, and also minimally inferentially equivalent. Thus, the ToD `logical form' of the sentence is minimally inferentially equivalent to the TVGT `logical form'.
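Inferential equivalence can be checked mechanically at the propositional level, where mutual entailment amounts to agreement under every truth-value assignment. A small sketch of my own, using the example just given:

```python
from itertools import product

def inferentially_equivalent(f, g, atoms):
    """Propositional-level test: A and B are inferentially equivalent
    iff each entails the other, i.e. they agree on every assignment."""
    return all(f(v) == g(v)
               for values in product([True, False], repeat=len(atoms))
               for v in [dict(zip(atoms, values))])

A          = lambda v: v["A"]
A_and_taut = lambda v: v["A"] and not (v["P"] and not v["P"])  # A /\ ~(P /\ ~P)

print(inferentially_equivalent(A, A_and_taut, ["A", "P"]))  # True
```

Minimality is a separate, syntactic check: here the proper subexpression A of the second sentence is already inferentially equivalent to the first, so the pair is inferentially equivalent but not minimally so.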

Note that all arguments involving sentences which are either false or fail of truth value are unsound, whether they are valid or not. For example, the existential generalisation

The King of France is bald
Something is bald

is valid but unsound, because the hypothesis, whether it is false or fails of truth value, is in any case not true. It follows that ToD and TVGT count the same arguments sound.

Finally, it is easy to show that inferentially equivalent sentences have exactly the same logical consequences, and are the consequences of exactly the same other sentences. Thus exactly the same role is played in inference by two inferentially equivalent sentences. However, consider the two inferentially-equivalent sentences A and A /\ ~(P /\ ~P). The second contains a suffix of symbols which are normally regarded (except for the place-holder P) as being significant for logical form (unless it is held, for example, that all tautologies have the same logical form). If T has the logical form of any tautology, then A and A /\ T are inferentially equivalent, and if F has the logical form of any contradiction, then A and A \/ F are inferentially equivalent. It seems, then, that the `extra' material is considered to demonstrate difference in logical form (we shall consider this in the section on Logical Form via Syntactic Transformation, next). I conclude that it follows from the explanation of logical form as role-in-inference that any logical form exhibited by the one sentence is shared by any other that is minimally inferentially equivalent to it.

Logical Form via Syntactic Transformation


The semi-formal language has particular syntactic categories: variables, constant symbols, terms, formulas of arity n, and so forth. Items of each category may standardly be divided into atomic and compound symbols. I say that a sentence A is syntactically transformable into sentence B if there exists an association of atomic symbols of A with compound symbols of the same category in B such that if these compound symbols are substituted for their atomic associates in A, sentence B results. Two sentences are syntactically equivalent if each is syntactically transformable into the other. Such a definition is applicable to any language with a categorial grammar (see (Aud95, entry on grammar), and Footnote 10). Most natural languages have extensive sublanguages whose syntax may be captured with a categorial grammar, so these relations can be defined also on these sublanguages.
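At the level of words, the substitution operation in this definition can be sketched directly. The function below is my own word-level approximation; it ignores syntactic categories and treats every whitespace-separated word as an atomic symbol:

```python
def syntactically_transform(sentence, association):
    """Substitute an associated compound expression for each atomic
    symbol occurring in the sentence."""
    return " ".join(association.get(word, word)
                    for word in sentence.split())

# `all Fs are G' is syntactically transformable into `all men are mortal':
print(syntactically_transform("all Fs are G", {"Fs": "men", "G": "mortal"}))
# ...and vice versa, so the two sentences are syntactically equivalent:
print(syntactically_transform("all men are mortal", {"men": "Fs", "mortal": "G"}))
```

A full implementation would check that each substituted expression belongs to the same syntactic category as the atomic symbol it replaces; that check is omitted here.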

My semi-formal language has atomic symbols - words - which are partly natural-language words, and partly place-holder special symbols. Observe that `all Fs are G' may be syntactically transformed into `all men are mortal' and vice versa; and similarly with these two sentences and `all Hs are J'. It follows that mutually syntactically transformable sentences in this pseudo-language must have place-holders in exactly the same places, and therefore must have the same logical form, in the sense of playing the same role in inferences.

Let us consider further. Wittgenstein had held in the Tractatus that logical form was basically propositional: atomic propositions correspond to full sentences, and they are combined with the sentential operators and, or, if...then..., not. In this view, it seems that `all Fs are G' could only correspond to an unanalysed sentence P, since it contains none of the sentential operators. But not all sentences P have the logical form of `all Fs are G': for example, some have the form of `Some Fs are G', or of Q \/ R, which both play a different role in inference. But P is certainly syntactically transformable into `all Fs are G', and it should also be clear that any valid inference in which P occurs remains valid when P is replaced by `all Fs are G' (this is enshrined in the property of invariance under syntactic transformation, there usually called substitution, of formal propositional logic). Thus, although not all of the inferential role of `all Fs are G' is explained by the role of P, some of it is. We may express this by saying that `all Fs are G' has the logical form of P, but not only the logical form of P.

Notice that syntactic transformation is a transitive relation: P is syntactically transformable into Q and Q is syntactically transformable into R imply that P is syntactically transformable into R; that it is not symmetric: P is syntactically transformable into Q does not imply Q is syntactically transformable into P; and that any complete sentence is a syntactic transformation of any sentence variable.

A Criterion for `Logical Form'

Back to Synopsis

We can say that S has the logical form of T if T is syntactically transformable into S. S and T may come from different (pseudo-)languages which share a part of their categorial grammar. This is directional: T need not have the logical form of S, as we saw in the last section. We can also say that S and T share logical form (symmetric) if S and T are minimally inferentially equivalent. It follows that S has the logical form of T if there is a minimal inferentially equivalent sentence U to S such that T is syntactically transformable into U.

This yields a criterion for logical form, namely the disjunction of these three conditions: S has the logical form of T if T is syntactically transformable into S, if S and T are minimally inferentially equivalent, or if there is a minimal inferentially equivalent sentence U to S such that T is syntactically transformable into U.

Given this criterion, the relation P has the same logical form as Q can then be defined as the `symmetric closure': P has the logical form of Q /\ Q has the logical form of P. It follows from the criterion for having the logical form of that the criterion for having the same logical form as simplifies: P has the same logical form as Q if there are minimally inferentially equivalent sentences to P and Q which are identical up to place-holder symbols.

The criterion allows the Russell example to have both the ToD logical form and the TVGT logical form. These two forms are, as we have seen, syntactically incompatible, and therefore we shall have to give up the notion of canonical logical form. However, I shall argue that there are independent reasons why the notion of logical form is not obviously a functional relation. I do not know how to sustain the notion that a sentence has a unique logical form, rather than many logical forms, against these difficulties.

In logic, there is a tradition that the logical form of a tautology is either unique, True, Frege's `the true', which also could be compatible with Wittgenstein's dictum that tautologies are meaningless (which would follow from a claim that True is itself meaningless); or that the form is some tautology of which it is a syntactic transformation, so, for example, the logical form of (~(P /\ ~P) \/ ~~(P /\ ~P)) is (Q \/ ~Q), the transformation being Q -> ~(P /\ ~P). Both of these conditions follow from the criterion. All tautologies are inferentially equivalent: the first condition follows because True contains no subexpression and thus is minimally inferentially equivalent to every tautology; and the second condition follows directly through syntactic transformation.

A Slight Change in Language

Back to Synopsis

Consider now the following case. It has been argued that some topological properties of time turn out to be logically necessary. Let T be the conjunction of these logically necessary properties. Then under the assumption that these are logically necessary, the following inference rule (*Time*) is valid:

      A
    ------   (*Time*)
    A /\ T

The argument is this: logically necessary propositions are true under all circumstances; therefore if A were true, T would also be true in these circumstances; therefore (by /\-introduction) A /\ T would be true. QED.
It follows directly from /\-elimination that

    A /\ T
    ------   (/\-elim)
      A

is valid, and so A and A /\ T are inferentially equivalent. So, whether or not A contains any surface syntactical structure conforming to statements of these temporal properties, its role in inference is identical to that of A /\ T.

Suppose the normal inference rules of classical logic are assumed (Pra65). Then A and A /\ T, although inferentially equivalent, would not be interderivable using the rules of classical logic (let us call this rule set Cl). However, they are interderivable using the rules of classical logic augmented with the rule *Time* (the rule set Cl *union* { *Time* }, which I shall write as Cl + *Time*). A /\ T is minimally inferentially equivalent to A, but not vice versa, since A /\ T contains a subexpression to which A is inferentially equivalent, namely A itself. Neither can be derived from the other formally using just the rules Cl.

History and intuition would incline us to say that the logical form of the sentence A is A /\ T. I would explain the intuition as follows. There are various standard formal logics around, which explain certain forms of inference, and whose inference rules are widely taken to be valid, such as classical propositional logic and classical predicate logic. These logics are pseudo-languages with a precise set of permitted inference rules. As we have seen with the case of logically necessary temporal properties, some consider that there can be valid inferences which are not derivable from the set of inference rules of classical propositional or predicate logic. To widen the set of inference rules to allow these valid inferences, we generally have two possibilities: one is to add new inferences without hypotheses (generally called axioms), and the other is to add new inference rules such as *Time* which do have hypotheses. The former case would be to add the axiom

    T

from which the rule *Time* now follows as a derived rule in classical propositional logic:

    A        (Hyp)
    T        (Axiom)
    A /\ T   (/\-intro)

The other option is to add the rule *Time* itself. For various formal reasons, the latter is nowadays preferred.

In claiming that the logical form of P(d), where d is a definite description, is (exists x)(d(x) /\ (forall y)(d(y) => y = x) /\ P(x)), we have already noted that these two pseudo-sentences should be inferentially equivalent. But since they are pseudo-sentences, containing meaningless place-holders, they are stipulatory objects. They have no independent existence as a natural language in order for us to observe that they are inferentially equivalent. The pseudo-language must therefore be equipped with inference rules that mimic the valid inferences in our natural language, and of course we get to stipulate these rules. If d is a predicate symbol, as in the Russell logical form sentence it is, then it cannot in first-order logic function also as a term, as it does in the former sentence. Part of Russell's point in explicating the logical form was to provide a form in the pseudo-language of first-order logic, whose role in first-order logic mimicked the role of P(d). Since this latter does not belong to the pseudo-language, the question does not arise as to what constitutes explicating the logical role.

In contrast, in the case of logically necessary temporal properties, A and A /\ T belong to the same pseudo-language, provided that pseudo-language incorporates the pseudo-language of propositional logic. (And of course A and `A and T' belong to the same natural language.) To convey exactly the pertinent information about the roles in inference of the natural sentences, we must stipulate such roles in the pseudo-language. I have been using a schematic letter A for an arbitrary sentence, and by the criterion of syntactic transformation the arbitrary natural language sentence I am talking about does indeed have this logical form; similarly for T, but here T is a rigid designator for an unknown and supposed logical necessity concerning time. The easiest way to reflect the valid inference encapsulated in *Time* is to stipulate that *Time* be included in the pseudo-language inferences. This would ensure that minimal inferential equivalence in the pseudo-language conforms to minimal inferential equivalence in the natural language. Of course, we have to know that T, along with the addition of *Time*, `works', that is, suffices to explain that part of our inferences which rests on logically necessary temporal properties, and this is a matter, ultimately, for empirical work: we propose properties and rules and see if they suffice to capture certain inferences and inferential equivalences. In order to explain roles in inferences following from the presumed logically necessary properties of time, then, we may say that A /\ T has the logical form of A, or simply that A and A /\ T are inferentially equivalent, or that A /\ T is minimally inferentially equivalent to A. Intuitively, we may have preferred to say that the logical form of A is (A /\ T), and believed ourselves to be in keeping with the spirit of Russell's ToD.
I hope I have shown that we would be mistaken in this analogy, and that trying to remain with this `intuitive' expression has no philosophical point that I can see. There is no disadvantage to changing our language in order to gain precision.

In both these cases, it has been proposed to perform complex syntactic transformations on the surface form in order to reduce the inferences to a known and agreed set, namely first-order predicate logic, rather than retaining logical form related to the surface syntax and devising inference rules to reflect the difficulties with non-referring singular expressions and necessary topological properties of time and so forth. Although these examples force the move to consider minimally inferentially equivalent sentences in the definition of logical form, and thus to lead to multiple logical forms for a given sentence, I will show that there are independent difficulties in developing a notion of `the' logical form, a canonical logical form, of a sentence. To simplify the argument and isolate it from the move to minimally inferentially equivalent forms, I shall consider the notion of syntactic transformation alone and show that it leads even so to multiple incompatible forms.

Difficulties with Attempting to Define a Canonical `Logical Form'

Back to Synopsis

The devices, operations and relations that I have described above on sentences of certain forms are very `real'. After all, I can write a sentence, and define a sentence to have the same type as another if they are orthographically similar in given ways (with some vagueness at the periphery - as experienced by anyone who attempts to read very many people's signatures). The criterion defined above of having the logical form of is nevertheless precise, and we may define having the same logical form as as its symmetric part.

May we start from the notion of `having the logical form of' already explained, and derive a canonical form, the logical form, of sentences or propositions? I can think of two ways of obtaining a unique logical form for sentences, and both have their difficulties. First, one could try to identify in a non-arbitrary way a canonical semi-formal sentence which implements all the logically-relevant details of the sentence's form. Second, one could identify all logical-formal correspondents to a given sentence, collect them into a set, and identify the logical form with that set, that `equivalence class':

Logical-form-of(P) == { Q | Q is a sentence /\ P has the logical form of Q }

The difficulty with the first approach could be construed as epistemological. There is, broadly speaking, a hierarchy of formal systems called logics, which are related in the hierarchy by their expressiveness. (Roughly, a logic is more expressive than another if there exists a semantics-preserving syntactic transformation of the second into the first. This is not a precise definition. A more precise, syntactically-based, definition may be found in the notion of theory translation of (TaMoRo71).) Thus both predicate logic and simple linear-time temporal logic (S4.3) are more expressive than propositional logic. Temporal predicate logic is more expressive than predicate logic. But the hierarchy is not strict. Predicate logic and simple linear-time temporal logic are expressively incomparable (for example, the latter is purely propositional, the former has no temporal operators). This situation yields two problems.

First, were the hierarchy to be strictly linear (that is, were every two logics in the hierarchy directly expressively comparable), it would be relatively straightforward technically to define a canonical logical form for a target sentence. One can presume that there is a most-detailed corresponding sentence in each logic, that is, a sentence that is a syntactic transformation of all other sentences in the logic of which the target has the logical form. I presume also without further argument that there cannot be an unbounded variety of meaningful logical differentiations to make if the target sentence has only a finite number of symbols. Then, because the target sentence has only a finite number of symbols, the corresponding most-detailed logical sentence eventually becomes constant as one goes deeper and deeper into the expressiveness hierarchy of logics. And in a strictly linear hierarchy, there would be only one path to follow to arrive at such a constant sentence. Therefore there would be only one (up to having the same logical form as) such constant sentence, and we could take this as the canonical logical form.

However, because the hierarchy is not strict, if one pursues this ever-deepening-form strategy, one must either end up with incompatible logical forms; or the constant sentence must be obtained before the hierarchy starts to `split', and so must be compatible with all possible expressively-richer logics. The former leads to a plethora of logical forms and therefore does not reach the goal, and the latter allows only propositional logical form, which is insufficient (the most-refined propositional logical form of the logical truth (forall x)(forall y)(x=y => y=x) is the propositional variable P, which is not a logical truth). More detailed attempts to counter this point fail (Footnote 11).

Then come epistemological objections. The history of logics has been the development of more and more `logics'. First came propositional logic and predicate logic, then modal logics, and we have temporal logics and type-theoretical logics, not to speak of higher-order logics. The expressiveness hierarchy of logics is constantly being added to. Not only that, but it is arguable whether some of the features of these `logics' are indeed logical. For example, use of `simple temporal logic' (basically the tense logic S4.3) requires that time is linear: that any two times are comparable in terms of `earlier' and `later'. Physicists may doubt this. The most well-known good argument for the a priori determinate structure of time is probably to be found in Kant, and such arguments are nowadays mostly found wanting (Footnote 12). So a definition of logical form by search through the hierarchy of existent formal logics falls prey to twin epistemological problems: how do we know we have `enough' logics? And some of the features of the formal logics that we do use are not obviously logical.

Let's therefore consider the second approach to trying to obtain a unique logical form. In order to choose a logical form to be the logical form, one could try to assimilate all the formal information contained in the various different `logical forms' in the different logics into one. There is a standard way to do this in the framework of set theory and mathematics. To illustrate the difficulties, let's consider this standard method, that of defining an equivalence class of semi-formal sentences which is to be identified with the logical form, something like

Logical-form-of(P) == { Q | Q is a sentence /\ P has the logical form of Q }

It may require a certain belief in the reality of `mathematical objects' (a `Platonist' belief) to construct equivalence classes of sentences. But even so, I think difficulties persist no matter what the underlying explanation of mathematics may be. Accordingly, I shall temporarily assume a mathematical-realist mode of expression, and leave the translation into other accounts to the reader's whim. We are assuming that there are many semi-formal languages in which a logical form for a target sentence may be identified, and we wish to assimilate each of these logical forms into one mathematical object which we shall identify with `the' logical form. The construction is simply to form the class of all sentences that are syntactically transformable into the target sentence, and identify this class with `the' logical form.

The problem is that this class is not unique, and therefore cannot serve as the canonical representative of logical form. The reason is as follows. Each semi-formal language would have its own notion of logical form, yet we wish our notion to range across languages. The equivalence class contains sentences from all languages, so it is something whose constituents are contingent. Its precise nature depends on the languages that happen to be around. Any time we were to invent a new language, the notion of logical form would literally change: sentences of this new language would be included in the equivalence classes, and the equivalence classes would literally not be the same mathematical objects as before. Whatever the notion of logical form may be, one hopes that its constituents are not contingently constituents.

In the face of this difficulty, one could propose that while the equivalence class is not the logical form, nevertheless at the very least it provides an identity criterion for the logical form: namely, the logical forms of two target sentences are identical when they correspond to identical equivalence classes. And no matter that constituents of these classes may be contingently so, the classes remain invariantly identical or different. However, this move to an identity criterion does not work: the identity is not invariant under changes to the collection of languages being used. One could imagine adding a language that discriminates between two target sentences that were not heretofore discriminated: consider the move from propositional to predicate logic, by which the two (pseudo-)sentences (forall x)(forall y)(x=y => y=x) and P, which have the same propositional form, are discriminated.
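The failure of invariance can be illustrated with a toy computation. The `partition` helper and the two `skeleton' functions below are my own illustrative devices, not anything in the text: the coarse skeleton stands in for propositional logic, which sees any quantified sentence as an unanalysed P, and the fine skeleton stands in for predicate logic, which discriminates quantified structure.

```python
def partition(sentences, same_form):
    """Group sentences into equivalence classes under same_form."""
    classes = []
    for s in sentences:
        for c in classes:
            if same_form(s, c[0]):
                c.append(s)
                break
        else:
            classes.append([s])
    return [frozenset(c) for c in classes]

sentences = ["P", "(forall x)(forall y)(x=y => y=x)"]

prop_skeleton = lambda s: "P"   # propositional logic: both are just P
pred_skeleton = lambda s: s     # predicate logic: quantifier structure visible

coarse = partition(sentences, lambda a, b: prop_skeleton(a) == prop_skeleton(b))
fine   = partition(sentences, lambda a, b: pred_skeleton(a) == pred_skeleton(b))
```

Admitting the richer language splits the single coarse class into two, so the class-objects, and with them the proposed identity criterion, literally change.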

So an attempt to define a canonical logical form has its difficulties. Nevertheless, I may achieve all that I need to here by means of the binary relation A has the logical form of B, without trying to find a canonical logical form for each sentence (Footnote 13). As Wittgenstein said, whatever the logical form may be, it seems it may be shown, but not so easily written.

The `Picture' Form of a Correspondence Theory

Back to Synopsis

We have seen that semi-formal languages, `pseudo-languages', which include `place-holder' symbols, enable one to `show' logical form but in a more precise way than in natural languages. This works through the mechanism of having the same logical form as the natural-language sentences but with highlighting, so to speak, via place-holder symbols. We also found a few puzzles when trying to determine if there could be a canonical logical form of a given sentence. I concluded that it is preferable to use the more basic relational expression having the logical form of, rather than the notion of a canonical logical form. Although this substantiates one interpretation of Wittgenstein's claim about showing, not stating, it may seem to conflict with the other, which is that propositions or statements conveyed by sentences picture states of affairs.

This conflict would arise if one were to suppose, like the early Wittgenstein or Russell, that a state of affairs has a particular form which it shares with a proposition `picturing' it. That is, that there would be a canonical form of the state of affairs, and that this is shared in common with the sentence or proposition. That would then imply, despite our considerations above, that this canonical form of a proposition is there to be discovered, even if we haven't found it yet. However, a simple observation rescues us from this misapprehension. If a proposition may have many logical forms, so may states of affairs. The revised picture theory would propose that a state of affairs shares all its logical forms with a proposition `picturing' it. Thus a picture theory need not presuppose the existence of a canonical form.

Although the `picture' idea is compelling, and is compatible with multiple logical forms, its intuitive plausibility nevertheless breaks down as we consider compound propositions. For example, suppose A expresses a proposition `the cat is on the mat'. Then ~A expresses the proposition that the cat is not on the mat. While we could believe that A corresponds to a picture, its negation corresponds to a variety of spatial states of affairs: the cat may be in the bathroom, outside, on the bed, anywhere but on the mat. One could `picture' the mat as devoid-of-cat, but this would be a picture which has no cat in it. If the corresponding proposition were nevertheless to say something about a cat, the question would arise why it would not explicitly contain reference to other things that aren't on the mat: since there are arbitrarily many of these (John F. Kennedy, my shoes, the city of London, .....) they cannot all be mentioned in the corresponding proposition. (I guess one could consider this a form of `frame problem'.) One may conclude that the proposition corresponding to the devoid-of-cat mat does not contain an explicit reference to the cat, and therefore cannot be ~A, which explicitly contains `cat' as a subexpression. Similar considerations hold for propositions formed by disjunction: A \/ the cat is on the bed. The cat cannot be in two places at once: accordingly, what single picture can correspond to the proposition?

Such difficulties were known to Russell and Wittgenstein. Russell considered only atomic propositions, propositions of a special sort, to obtain their meaning by `picturing'. `Pictures' corresponding to complex propositions are not representations of atomic states of affairs, but rather representations of how states of affairs might be, were the states of affairs represented by the atomic components to obtain. This representation was given by truth tables. The form common to the pictures and the proposition was the logical form, which for Wittgenstein was shown by truth tables, as opposed to the usual symbolic notation which we use nowadays. There was thus no difficulty in explaining the meaning of truth-functionally compound assertions in terms of the atomic assertions, which obtained their meaning by unmediated correspondence with states of affairs. There were, however, difficulties in determining what were the atomic propositions, corresponding to `atomic' states of affairs. Wittgenstein, for his part, considered the `picture' of a complex state of affairs simply to be given by the truth table. This extends the notion of picturing beyond the simple similarity of blocks on the table to cars on the road. Eventually both Russell and Wittgenstein abandoned the development of this specific doctrine of logical atomism (Footnote 14).

I prefer to think of the `picture' of a proposition as being given, as in mathematical model theory, by stating what objects are visible (a mathematical model does not include all possible objects, but stipulates a specific set, over which the quantifiers are interpreted to `range'), and what predicates hold over, and relations between, `visible' objects (see again (Footnote 6)). Then, the proposition states a logically complex assertion, and in order for this assertion to succeed (the proposition to make a statement and this statement to be true), the picture has to be a certain way. Wittgenstein justified extending the `picture' correspondence to all assertions by assimilating propositional-logical combination to picturing. However, we may prefer, as we have done, to consider these syntactic operations in another form, that of logical inference, separate from the basic correspondence involved in picturing. Construing these operations as syntactic has also allowed us to handle a case which could not be handled by any `picturing', namely that in which a referring term does not refer, since there is no explanation of how one may develop a `picture'-like correspondence for a non-referring term.


Back to Synopsis

I have considered the definition of logical form as that structure of sentences or propositions which accounts for their role in inference. I have also briefly considered the Wittgensteinian view that logical form is what a proposition shares with a state of affairs that it correctly describes.

I explained the Wittgensteinian view that logical form could be shown, but not stated, by explaining how it could be shown (although I offered no proof that it could not be stated, only the puzzles attendant upon so trying).

I considered relations under which

  • sentences performed equivalent roles in inference (inferential equivalence);
  • a sentence performed a subrole of the inferential role of another (syntactic transformation);
and used these to propose a criterion for the relation having the logical form of. I note that Davidson holds a similar view (Footnote 15).

This criterion allows that a given sentence may have the logical form of more than one (syntactically incompatible) other sentence. This casts doubt on the notion of canonical logical form. However, that may have been an artifact of my criterion, and thus a ground for criticism of the criterion itself. I then showed that the phenomenon that a sentence may have more than one logical-form correspondent follows from more general considerations than my criterion, namely from the consideration merely that syntactic transformation can show logical form (which it surely can). I considered various ways that one could produce the logical form of a sentence/proposition A, namely a unique sentence/proposition B such that A has the logical form of B, and found these ways all unsatisfactory. However, working with the binary relation A has the logical form of B seems to be relatively unproblematic. One merely has to give up the presumption that it is functional (in the technical sense of having only one correspondent).

I also considered an apparent anomaly: that in some cases in which there are logically necessary properties that must be presumed by propositional structure but do not appear in the surface form, my criterion of logical form ascribes the relation `the wrong way round'. I concluded that nothing of philosophical significance could possibly follow from this, and that the precision of the criterion was more than enough reason to change linguistic habits: the anomaly is in the habit rather than caused by the criterion.

I also considered sporadically what parts of a `picture' theory could be retained, and where picturing fit in to the clarification of logical form. I make no great claim that this illuminates Wittgenstein, but rather that it clarifies how far a simple correspondence explanation may be used. This will be shown in (Lad97.4) to have consequences for the notions of abstraction and modelling.

It seems that the notion of logical form is thus fundamentally a binary one; that the idea of a `canonical' logical form for a sentence, proposition or statement leads to problems and seems to bring no particular advantage; and that the notion of logical form as clarifying the role played in inference may be partially explained using the notions of syntactic transformation and inferential equivalence.


Back to Synopsis

(Aud95): Robert Audi, ed., The Cambridge Dictionary of Philosophy, Cambridge University Press, 1995. Back

(Bla94): Simon Blackburn, The Oxford Dictionary of Philosophy, Oxford University Press, 1994. Back

(Dav70): Donald Davidson, Action and Reaction, Reply to essays by Hedman and Cargile in Inquiry, Summer 1970. Reprinted in (Dav80). Back

(Dav80): Donald Davidson, Essays on Actions and Events, Oxford University Press, 1980. Back

(Lad97.4): P. B. Ladkin, Abstraction and Modelling, Technical Report RVS-RR-97-04, available at Back

(LeP90): Robin Le Poidevin, Relationism and Temporal Topology: Physics or Metaphysics? Philosophical Quarterly 40: 419-432, 1990. Reprinted in (LePMcB93). Back

(LePMcB93): Robin Le Poidevin and Murray McBeath, eds., The Philosophy of Time, Oxford University Press, 1993. Back

(New80): W. H. Newton-Smith, The Structure of Time, Routledge, 1980. Back

(Pra65): Dag Prawitz, Natural Deduction: A Proof-Theoretical Study, Almquist and Wiksell, Stockholm, 1965. Back

(Ric96): Thomas Ricketts, Pictures, logic and the limits of sense in Wittgenstein's Tractatus, in (SluSte96). Back

(SluSte96): Hans Sluga and David G. Stern, eds., The Cambridge Companion to Wittgenstein, Cambridge University Press, 1996. Back

(Swi81): Richard Swinburne, Space and Time, Macmillan, 1968, 2nd. edn. 1981. Back

(TaMoRo71): Alfred Tarski, Andre Mostowski and Raphael Robinson, Undecidable Theories, North-Holland, 1971. Back

(Wit22): Ludwig Wittgenstein, Tractatus Logico-Philosophicus, German+English, trans. C. K. Ogden, Routledge, 1922; also trans. D. F. Pears and B. McGuinness, Routledge & Kegan Paul, 1961. In German only, Suhrkamp Taschenbuch 501, 1984. Back

(Wol89): Sybil Wolfram, Philosophical Logic: An Introduction, Routledge, 1989. Back


Back to Synopsis

Footnote 1:
It is also interesting to note that the view held by some engineers towards claims that program verification is in principle impossible is justified by arguments that may appear to be close to those favored by the later `language-game' Wittgenstein.

Footnote 2:
If the sentence tokens are written in typeface, types are considered modulo capital letters and line breaks. With different typefaces, one can construct a mapping between the characters and obtain types again modulo capital letters and line breaks. But two `identical' imprints of the same character are not really orthographically identical - there will be microscopic differences in the formation of the letters. This is not really a problem. The type was created somehow, either by a physical object (in the old days of leaden type) or by the execution of a sequence of actions at a particular point on and by a computer (in today's world of laser printers). Generally, text is an intentional object, so some considerations about the intention of the author/publisher/typesetter may also be invoked. I take it, then, that the notion of occurrence of the `same' character in machine-generated text is relatively unproblematic. The notion of `same' character in handwritten text is harder to recognise, and one may rely more on recognising token words or phrases than characters; the intentional nature of the text plays a greater role. This creates no further philosophical problems, since we may choose to make the distinctions just for machine-generated text if we wish.

Footnote 3:
The examples of the inferences in the definition from (Bla94) suffice to show that the form described is (part of) logical form. The inferences with `Some Fs are G' would be invalid: the conclusion would not be guaranteed to be true if the hypotheses were to be true. This form is similarly to be distinguished from that of `All mortals are men' and `Some mortals are not men', and so forth. Thus I shall take it as uncontroversial that the form indicated here is part of logical form in the second sense.

Footnote 4:
Also (almost) a thesis of the later Wittgenstein (SluSte96): we can invent `language-games', that is, processes that involve the use of language. Wittgenstein also held that these processes must be social (the so-called `private-language argument'). There are some constraints on what shall constitute a `language-game', which need not concern us.

Footnote 5:
A binary relation R is functional if and only if for each element h in the domain of the relation, there is a unique element j in the domain such that h R j (in infix notation; written R(h,j) if one prefers prefix notation). The domain of a relation is the collection of all things that can either be in that relation to something else, or to which there is something in that relation.
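For finite relations, the footnote's definition can be checked mechanically. The following is a minimal sketch (the function names are my own, and a relation is represented simply as a set of ordered pairs), taking `domain' in the footnote's inclusive sense:

```python
# Sketch: a finite binary relation as a set of ordered pairs (h, j).

def domain(relation):
    """The `domain' in the footnote's inclusive sense: everything that
    either bears the relation to something, or has something bearing it."""
    return {x for pair in relation for x in pair}

def is_functional(relation):
    """True iff every element of the domain bears the relation to a unique element."""
    targets = {}
    for h, j in relation:
        targets.setdefault(h, set()).add(j)
    return all(len(targets.get(h, set())) == 1 for h in domain(relation))

print(is_functional({("a", "b"), ("b", "a")}))          # True
print(is_functional({(0, 1), (1, 2), (2, 3), (3, 4)}))  # False: 4 relates to nothing
```

Note that on this inclusive reading of `domain', the successor relation on a finite initial segment of the numbers fails to be functional, since the last element bears the relation to nothing.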

Footnote 6:

"For an affirmative or non-style (i.e. partly affirmative) proposition to express a truth it must satisfy several conditions [...]:
that every referring expression ([...] expression whose meaning does not determine what it refers to) secures reference;
that successfully identified objects exist at appropriate times;
that objects stand in the relations in which they are said to stand, have the properties, etc. that they are said to have."
(Wol89, p185).
These conditions hold for both the Theory of Descriptions and the Truth-Value-Gap Theory of non-referring singular terms, discussed below.

Footnote 7:
The surface syntax of a sentence is the syntax that the token sentence has. For semantical purposes, such as for Russell's Theory of Descriptions, the sentence may be regarded to have another syntactical form which more closely corresponds to what is taken to be its meaning. Such a form is often known as `deep structure'. The terms originally came from Chomsky's writings on Transformational Grammar.

Footnote 8:
According to one textbook, an argument is valid if, whenever all its premises are true, its conclusion is true also (Wol89, p11); an argument is sound if it is a valid argument with true premises (Wol89, p13). A sound argument must thus have a true conclusion. However, I prefer to phrase the definition of validity using the subjunctive, because part of its meaning is counterfactual. See Footnote 9.

Footnote 9:
The subjunctive grammatical form of expression is necessary to indicate the counterfactual nature of the definition. Without the counterfactual, the definition would read

if all the hypotheses are true, the conclusion is also true
which is true in case either one of the hypotheses is false, or the conclusion true. If this definition were to be used, then the inference

The King of France is bald
The King of France is bald

would be invalid on the Truth-Value Gap theory, since neither is the hypothesis false nor is the conclusion true. I wish to allow all inferences of the form

A
A

to be valid.
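The point can be made concrete with a small sketch. Here a truth-value gap is represented by None (an encoding invented for illustration, not taken from any of the cited theories), and the non-counterfactual definition is applied directly:

```python
# Truth values: True, False, or None (a truth-value gap).

def materially_valid(hypotheses, conclusion):
    """The non-counterfactual reading: the inference passes just in case
    some hypothesis is false, or the conclusion is true."""
    return any(v is False for v in hypotheses) or conclusion is True

gap = None  # `The King of France is bald' on the Truth-Value Gap theory
print(materially_valid([gap], gap))    # False: the inference A / A comes out invalid
print(materially_valid([False], gap))  # True: a false hypothesis suffices
```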

Footnote 10:
A categorial grammar classifies sentence elements into distinct, disjoint sets, called categories. Grammatical rules specify in terms of these categories how items from these categories may be combined to form grammatical phrases. For example, we may have as categories nouns, verbs and articles, and in each category numerous different words. We may specify that a verb followed by an article followed by a noun forms a verb-phrase, and a noun followed by a verb-phrase forms a sentence.
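The two rules of the footnote's example can be sketched as a toy recogniser; the lexicon entries are invented for illustration:

```python
# Invented lexicon; the categories are disjoint sets of words.
LEXICON = {
    "dog": "noun", "ball": "noun",
    "chases": "verb", "sees": "verb",
    "the": "article", "a": "article",
}

def reduce_verb_phrase(cats):
    """Rule 1: a verb followed by an article followed by a noun forms a verb-phrase."""
    out, i = [], 0
    while i < len(cats):
        if cats[i:i + 3] == ["verb", "article", "noun"]:
            out.append("verb-phrase")
            i += 3
        else:
            out.append(cats[i])
            i += 1
    return out

def is_sentence(words):
    """Rule 2: a noun followed by a verb-phrase forms a sentence."""
    return reduce_verb_phrase([LEXICON[w] for w in words]) == ["noun", "verb-phrase"]

print(is_sentence("dog chases the ball".split()))  # True
print(is_sentence("the dog chases".split()))       # False
```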

Footnote 11:
More complex theory translations do not suffice. One may suppose for example that what can be expressed in `simple temporal logic' can equivalently be expressed in first-order sorted logic with an explicit sort for times and an appropriately-axiomatised ordering relation. Propositional and predicate symbols in the temporal logic sprout an extra argument for `time' (the `timestamp'). A sentence such as []P, intended to mean always in the future, P, whose truth is evaluated at timepoint t (`possible world' t in the Kripke terminology) is transformed into (forall t' > t) P(t'), where P(t') stands for the transformation of P in which all atomic predicate symbols sprout the extra argument place and fill it with t'. The relation > may or may not be reflexive, but it is generally held to be asymmetric or antisymmetric, and transitive. Such a translation is plausible and natural, and in fact often used in applied theories of time. However, it is not a logical-property-preserving transformation in general: it is well known that there are properties of modal logics which cannot adequately be represented in any first-order formulation. One might reply that although this is true in general, for a particular tense logic it is in fact the case that all relevant properties can be so represented. But I doubt this claim in general; and it would also have to be established why the particular tense logic chosen is the tense logic, which means one would have to provide an a priori argument why the particular time structure is the structure of time: and this argument must be an argument concerning logic, not physics. I doubt whether any such argument could be successfully given, and my reasons - or rather, other people's reasons - lie in Footnote 12 immediately below.
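The translation described can be sketched for a toy language with just `[]' and negation; the representation of formulas as nested tuples and the naming scheme for the fresh time variables are my own choices:

```python
# Formulas: an atomic proposition is a string; []P is ("box", P); negation is ("not", P).

def translate(formula, t, depth=0):
    """Translate a temporal formula, evaluated at time t, into a first-order string."""
    if isinstance(formula, str):                 # atomic P sprouts a timestamp: P(t)
        return f"{formula}({t})"
    op, arg = formula
    if op == "box":                              # []P at t becomes (forall t' > t) P(t')
        fresh = f"t{depth + 1}"
        return f"(forall {fresh} > {t}) {translate(arg, fresh, depth + 1)}"
    if op == "not":
        return f"not {translate(arg, t, depth)}"
    raise ValueError(f"unknown operator: {op}")

print(translate(("box", "P"), "t"))           # (forall t1 > t) P(t1)
print(translate(("box", ("box", "P")), "t"))  # (forall t1 > t) (forall t2 > t1) P(t2)
```

The sketch makes the footnote's point vivid in one direction only: every formula of the toy temporal language has a first-order image, but nothing guarantees that the logical properties of the modal source survive the trip.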

Footnote 12:
This opens up a hornet's nest - or, if the reader would prefer, a whole theme in modern philosophical logic (if you believe one side), or metaphysics (if you believe another), or physics (if you believe the empiricists). It has been argued that time has a particular topology as a matter of logical necessity, cf. Time [...] being of logical necessity unique, one-dimensional, and infinite, has of logical necessity a unique topology (Swi81, p209), or Time [...] is of logical necessity unbounded (Swi81, p207). Such arguments have been thoroughly investigated and countered in (New80), who also provides arguments against Prior (on closed time), Quinton (on unity) and Aristotle (on boundedness). The history is summarised in (LeP90) as follows:

Intriguingly, there is a whole range of questions, the orthodox view of which has undergone a complete reversal. Examples of such questions are: `Does time have a beginning and/or end?', `Is time branching or non-branching?', `Does time have a linear or a closed structure?', `Is time dense or discrete?'. Questions of this kind concern the topological structure of time, and were once regarded as susceptible only to a priori argument. In the Physics Aristotle argues that time is of necessity unbounded and dense. In the Treatise Hume argues that (given his particular brand of empiricism) time is of necessity discrete. The Aristotelian attitude lingers on in some modern writers such as Prior, who finds incoherence in the notions of branching and non-unified time, and Swinburne, who similarly objects to the notions of bounded and closed time. But these views are the exception in modern debate. Since Gödel pointed out that the results of General Relativity allowed for non-standard temporal topologies, the philosophical orthodoxy has shifted towards regarding time as having its topological properties only contingently. Grünbaum, for example, in his development of the causal theory of time, explicitly allowed for the possibility that time might be closed rather than linear. And the view that theories concerning the topology of time are empirical has recently been forcefully expressed in William Newton-Smith's book [(New80)].
Maybe needless to say, it doesn't matter what the orthodoxy is, just what's correct, and since what's correct must be justified, our evidence for what's correct depends on the quality of the justification provided. Since justification involves mainly reasoning, one can hope that quality of justification covaries with time, as argument and critique are cumulative: new arguments are considered and analysed by more and more people. But of course this is only a hope: mistaken arguments are cumulative, also. And the history of philosophy yields periods during which people have been convinced by arguments that were later rejected in favor of more thorough analyses of older arguments.

Le Poidevin's comments address the question of whether the structure of time is a priori, and not necessarily whether it is so of logical necessity. To identify the two themes would require the thesis that the only a priori truths are logical necessities. Kant, for example, would not subscribe to this thesis. However, authors such as Prior (and, as we have seen, Swinburne) do hold that certain topological properties of time are matters of logical necessity.

Footnote 13:
John Corcoran (Aud95, entry on logical form) maintains there is a unique logical form, and calls what I have called `logical form' a schematic form of a sentence. However, he bases his definition of logical form on the notion of a logically perfect language, and how one obtains the relation between a natural or semi-formal language and a logically perfect language is not clarified. In a logically perfect language, all names name simply, all singular terms have determinate referents, and the interpretation of all the primitive symbols is logically uncomplicated. In other words, all the philosophical problems in this area have been solved. Therefore, it remains to be seen whether there are, or can be, logically perfect languages at all, and therefore whether his definition of logical form is non-vacuous. At the least, since the philosophical problems with logically imperfect languages remain, the translation into a logically perfect language must contain the solution to those problems. Since he doesn't explain how this translation may be made, he does not explain how a natural-language sentence, or a sentence in a semi-formal language such as I have talked about, can have a logical form; nor does he provide a proof of his claim that logical form is indeed unique (nor can he: there are translations of natural-language sentences into two incompatible logically perfect languages, thereby yielding two different logical forms, according to his definition - see Footnote 12, above). He also does not explain why he requires of the notion of logical form that it be unique. He has therefore not established a coherent use of his terminology, and I thereby have little reason to conform with it.

Footnote 14:
Two explanations of what the phrase means:

Russell referred to all his philosophy after 1898 as logical atomism, indicating thereby that certain categories of items were taken as basic and items in other categories were constructed from them by rigorous logical means. [...] the label is now most often applied to the modified realism Russell held from 1905 to 1919. (Aud95, entry on Russell, p701)

logical atomism
The philosophy of the Tractatus Logico-Philosophicus of Wittgenstein, and the paper `The Philosophy of Logical Atomism' by Russell (1918). Both share the belief that there is a process of logical and philosophical analysis of language which ultimately terminates in `atoms' of meaning. To such atoms correspond elements in states of affairs or facts, so the process reveals the basic metaphysics implied by our language, or, in the case of Wittgenstein, by all possible languages (since the process of analysis reveals what must be the case for picturing, or meaning, to be possible). [...] (Bla94)
I use the term `logical atomism' in accordance with the first explanation. However, I would prefer a relativist view: what is taken as basic depends on what one is trying to explain. I don't think I subscribe to the view that there are simply `basic' sorts of things.

For Russell, these `basic' states of affairs were perceptual data, so-called `sense data'.

sense data:
Literally, that which is given by the senses. But in response to the question of what exactly is so given, sense data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture show which itself only indirectly represents aspects of the external world [..]. The view has been widely rejected as implying that we really see only extremely thin colored pictures interposed between our mind's eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens, simply responsible for producing private images, and stress the active life of the subject in the world as the determinant of experience [..]. (Bla94)

For Wittgenstein, these atomic facts were to be logically independent, that is, for any pair of them, there must be four different possible states of affairs according to the four combinations of truth values a pair of propositions can have (each individually has two, combined that makes four). The reason for this requirement was to enable the explanation of the fact-correspondence for compound propositions in terms of truth tables formed by combining the atomic components of the compound in the `shown' way.

Now, it's not at all clear that sense-data propositions, even simple ones, could be truth-value independent. Let's suppose that `seeing red' (no forms, just a uniform visual sense of red, all over) is an atomic sense-datum. It's difficult to see how to get more `atomic' with respect to visual sense. Similarly for `seeing yellow', which seems to be a different proposition. My field of vision can't be red-all-over and yellow-all-over at the same time, which rules out one truth-value combination. It can be red-all-over and not yellow-all-over, vice versa, or neither, which allows the other three combinations, and incidentally shows the propositions can't be identical (else only two out of four would be possible). But three out of four won't do for truth-value independence - one needs four out of four. So these can't be the `atomic' propositions/states of affairs that Wittgenstein is searching for. But if these seemingly simple-as-possible sense-data aren't `atomic' sense-data, then what on earth could be? No reasonable alternative has been forthcoming. So much for the sense-data theories of atomicity.
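The counting argument can be spelled out by brute force: enumerate the four truth-value combinations and strike out the one excluded by the incompatibility of red-all-over and yellow-all-over. (The encoding is mine, for illustration only.)

```python
from itertools import product

def possible(red, yellow):
    """A visual field cannot be red-all-over and yellow-all-over at once."""
    return not (red and yellow)

# All truth-value combinations for the pair that remain possible.
combinations = [(r, y) for r, y in product([True, False], repeat=2) if possible(r, y)]
print(len(combinations))  # 3 - one short of the 4 needed for truth-value independence
```

Were the two propositions identical, only the two agreeing combinations would survive, which is the parenthetical point about non-identity above.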

Wittgenstein seemed to abandon the search for atomic, independent propositions as he began to get more concerned with the role that language played in creating apparently-philosophical problems. But there is much in common between his Tractatus view and his later views, as now generally recognised (see, for example, various essays in (SluSte96)).

Footnote 15:
After substantially completing this essay, I read Davidson's essay (Dav70), reprinted in (Dav80). Davidson holds that logical form is relative, at least to the logical language in which it is expressed (I go somewhat further, but the difference seems to be simply definitional). He also holds that determining the role of a sentence in inference is a criterion of logical form. In this passage from (Dav80, p140), Davidson acknowledges the relativity of logical form and that a criterion for logical form is determining the role of a sentence in entailments:

[...] I am happy to admit that much of the interest in logical form comes from an interest in logical geography: to give the logical form of a sentence is to give its logical location in the totality of sentences, to describe it in a way that explicitly determines what sentences it entails and what sentences it is entailed by. The location must be given relative to a specific deductive theory; so logical form itself is relative to a theory. The relativity does not stop here, either, since even given a theory of deduction there may be more than one total scheme for interpreting the sentences we are interested in and that preserves the pattern of entailments.
Back (to Footnote 15 ref.)