A Semantics for the Predicate Calculus

 

(1) The Propositional Calculus again

 

In the propositional calculus, we abstract away from the internal structure of sentences and deal with Ps and Qs. The semantics we offer in terms of truth tables gives us the conditions under which a complex formula would be true in terms of the truth values of the component Ps and Qs, but we are not offered an account of the truth conditions of the Ps and Qs themselves. In other words, truth tables tell us about the logical constants - sentence connectives - but not about the sentences which they connect.

 

This is not a failing of the propositional calculus, for it is not designed to do otherwise. But it does mean that

 

(i) truth tables cannot provide a semantics for simple sentences; and

 

(ii) not every relation of validity can be expressed in the propositional calculus.

 

Our standard example (slightly modified) is:

 

(iii) All Frenchmen are green;

       Tony Blair is a Frenchman;

       therefore, Tony Blair is green.

 

This is a valid argument, but its validity does not rest upon a truth table interpretation of the logical constants, for no logical constants occur in the argument.

 

(2) A little bit of syntax (revisited)

 

The predicate calculus is a formal language. We stipulate as follows:

 

Symbols

(i) Names: a, b, c,…

(ii) Predicates: F, G, H,…

(iii) Variables: x, y, z,…

(iv) Logical constants: ¬, &, v, →, ↔

(v) Quantifiers: ∀, ∃

 

Rules

(i) If <a1, a2,…, an> is a sequence of names and φ is an n-degree predicate, then φ(a1, a2,…, an) is a wff.

(ii) If <x1, x2,…, xn> is a sequence of variables and φ is an n-degree predicate, then φ(x1, x2,…, xn) is a wff.

(iii) If φ is a wff with at least v as a free variable and Q is a quantifier, then Qvφ is a wff.

(iv) If φ is a wff containing at least one name η and Q is a quantifier, then Qvφ[η/v] is a wff, where φ[η/v] is the result of replacing η by v throughout φ.

(v) If X, Y are wffs, then ¬X, X & Y, X v Y, X → Y, and X ↔ Y are wffs.

It will be noted from rules (ii) and (iii) that there will be wffs that are neither true nor false. These are open sentences: wffs that contain at least one variable that is not bound by a preceding quantifier. For example, ‘Fxy’ is an open sentence with two free variables; ‘(∀x)(Fxy)’ is an open sentence with one free variable.
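As an informal aside, the idea of a free variable can be sketched in Python, using an invented tuple representation of formulae (the quantifier and predicate tags are not part of the formal definitions above; they are merely illustrative):

VARIABLES = {'x', 'y', 'z'}

def free_vars(f):
    """Return the set of free variables of a formula in tuple form."""
    if isinstance(f, str):
        return {f} if f in VARIABLES else set()
    op = f[0]
    if op in ('forall', 'exists'):
        _, v, body = f
        return free_vars(body) - {v}          # binding removes the variable
    return set().union(*(free_vars(part) for part in f[1:]))

print(free_vars(('F', 'x', 'y')))                    # x and y free: an open sentence
print(free_vars(('forall', 'x', ('F', 'x', 'y'))))   # only y free: still an open sentence

A formula is a sentence, in the strict sense, just when this set is empty.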

 

(3) Tarski’s problem

A consequence of the above is that we cannot directly specify truth conditions recursively for formulae of the predicate calculus, for the formulae are made up of parts which are themselves neither true nor false. This is unlike the propositional calculus, where we deal with Ps and Qs alone.

 

In other words, the propositional calculus is truth functional because the value of each formula is a function of the truth value of its component formulae. An individual formula is a truth function of itself. The whole assignment of values, therefore, is recursive over truth values.
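To see what ‘recursive over truth values’ amounts to, here is a minimal sketch in Python (the tuple representation of formulae is invented for illustration): the value of any complex formula is computed from the values the assignment gives to its atomic Ps and Qs.

def val(formula, assignment):
    """Truth value of a propositional formula under an assignment to its atoms."""
    if isinstance(formula, str):                         # an atomic P, Q, ...
        return assignment[formula]
    op, *parts = formula
    if op == 'not':
        return not val(parts[0], assignment)
    if op == 'and':
        return val(parts[0], assignment) and val(parts[1], assignment)
    if op == 'or':
        return val(parts[0], assignment) or val(parts[1], assignment)
    if op == 'imp':
        return (not val(parts[0], assignment)) or val(parts[1], assignment)
    raise ValueError(op)

print(val(('imp', 'P', ('or', 'P', 'Q')), {'P': True, 'Q': False}))   # True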

 

The problem with the predicate calculus is that while we want assignments of truth values to its sentences, the assignment can’t be recursive over truth and falsity. For example, we can’t show how the truth value of a conjunction is a function of the truth values of its conjuncts, for there is no way to assign truth or falsity directly to components that are open sentences. If the predicate calculus contained just a finite number of formulae, then we could assign truth values directly, one by one, just as if we were dealing with the propositional calculus. But there are infinitely many formulae, each one containing sub-formulae which are not truth evaluable. How, then, can we assign values?

 

Tarski’s great achievement was to show how we could recursively assign not truth values but values of a different kind - satisfaction by sequences - which apply uniformly across all items of the language, not just sentences: truth and falsity will drop out as special cases of satisfaction.

 

The Way of Tarski: Satisfaction by Sequence 

 

Some definitions:

 

Model: An infinite set of objects, together with a relation that maps selections from the set onto our wffs.

 

Sequence: An infinite ordered selection from the set of objects. There are as many sequences as there are ordered arrangements of the objects of the model - an infinity. Formally, we may say that for any sequence S, there is a further sequence S*, such that S* ≈i S, i.e., S* differs from S in at most its ith place, for any ordinal i.

 

For example, let S1 be the sequence:

 

S1: <5, the Queen, Jupiter,…>.

 

S1 differs from the following sequences in, respectively, its first, second, and third places:

 

S2: < UEA, the Queen, Jupiter,…>.

S3: <5, UEA, Jupiter,…>.

S4: <5, the Queen, UEA, …>.   

 

Therefore: S1 ≈1 S2

                 S1 ≈2 S3

                 S1 ≈3 S4

 

It might be noted that the relation ‘≈’ is reflexive, i.e., each sequence differs from itself in at most its ith place, for any ordinal i. For the relation to hold, it is not necessary for there to be a difference; it is only necessary that, if there is a difference, it is in at most one place. In particular, of sequences that don’t differ at all (that is, we just have one sequence), it is true to say that they differ in at most one place. It is vital that you don’t confuse ‘at most one’ with ‘at least one’. The former phrase means ‘not more than one’, and zero is not more than one. The latter phrase just means ‘one or more’, and zero is not one or more. We speak, then, of ‘a difference in at most one place’ to cover both cases: where there is no difference, and where there is a difference in just the one place. Simple and beautiful.
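As an informal illustration, the relation can be sketched in Python. Finite tuples stand in for the infinite sequences of the text, positions are counted from 1 to match s1, s2, …, and the objects are just the ones from the example above:

def differs_at_most(s_star, s, i):
    """True iff s_star differs from s in at most its i-th place (1-indexed)."""
    return all(a == b
               for pos, (a, b) in enumerate(zip(s_star, s), start=1)
               if pos != i)

S1 = (5, 'the Queen', 'Jupiter')
S2 = ('UEA', 'the Queen', 'Jupiter')

print(differs_at_most(S2, S1, 1))   # True: the only difference is in the 1st place
print(differs_at_most(S1, S1, 2))   # True: no difference at all (reflexivity)
print(differs_at_most(S2, S1, 3))   # False: they differ in the 1st place, not the 3rd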

 

(1) The semantics: Satisfaction

For the semantics of satisfaction by sequences to work, we must index variables to positions in sequences. This is just a notational convenience. It is not that each variable carries its index with it, as it were; rather, the indexing is simply our way of relating a given formula with free variables to a range of sequences. As we shall see, the indexes are a ladder we can kick away, once we have climbed up.

 

To make things simple, let us imagine a language in which there are just two predicates, ‘sleep’ and ‘love’. Thus, there are two open sentences (with, for the sake of simplicity, variables indexed to the first and second positions): ‘sleep(x1)’ and ‘loves(x1, x2)’.

 

Now let us introduce a relation SAT (satisfaction), which holds between formulae and sequences. The definition of SAT will essentially provide the semantics for the language.

 

(i) SAT[S, sleep(x1)] iff s1 sleeps, where s1 is the first object of S.

 

(ii) SAT[S, loves(x1, x2)] iff s1 loves s2, where s1 and s2 are respectively the first and second objects of S.

 

From the definition of sequences, for any object s of the model, there is some sequence with s in its 1st position; there is some other sequence with s in its 2nd position; there is still another sequence with s in its 3rd position; and so on. Consequently, to specify a satisfaction condition on a given position of any sequence is to specify a condition on every object.

 

Thus, (i) says that any sequence whose first member sleeps satisfies ‘sleep(x1)’, i.e., every object that sleeps satisfies ‘sleep(x1)’, i.e., the set of sleepers is the semantic value of ‘sleep’.

 

Similarly, (ii) says that any sequence whose first member loves its second member satisfies ‘loves(x1, x2)’, i.e., every pair of objects whose first member loves its second satisfies ‘loves(x1, x2)’, i.e., the set of pairs whose first member loves the second is the semantic value of ‘love’.

 

This adequately captures the semantics.
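Informally, clauses (i) and (ii) can be sketched in Python. The finite extensions of ‘sleep’ and ‘love’ below are invented purely for illustration; in the text the model is infinite:

SLEEPERS = {'the Queen', 'Bill'}                  # invented extension of 'sleep'
LOVES = {('Bill', 'the Queen')}                   # invented extension of 'love': ordered pairs

def sat_sleep(S):
    """SAT[S, sleep(x1)]: the 1st object of S sleeps."""
    return S[0] in SLEEPERS

def sat_loves(S):
    """SAT[S, loves(x1, x2)]: the 1st object of S loves the 2nd."""
    return (S[0], S[1]) in LOVES

print(sat_sleep(('the Queen', 7, 'UEA')))     # True
print(sat_loves(('Bill', 'the Queen', 7)))    # True
print(sat_loves(('the Queen', 'Bill', 7)))    # False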

 

(2) Satisfaction and complex open sentences

Satisfaction also applies to complex formulae formed from open sentences via the logical constants. Here, the constants have their usual truth table interpretations defined over sequences.

 

(i) SAT[S, ¬sleep(x1)] iff it is not the case that s1 sleeps.

(ii) SAT[S, sleep(x1) & sleep(x2)] iff s1 sleeps and s2 sleeps.

(iii) SAT[S, sleep(x1) v sleep(x2)] iff either s1 sleeps or s2 sleeps.

(iv) SAT[S, sleep(x1) → sleep(x2)] iff it is not the case that s1 sleeps and s2 doesn’t sleep.

(v) SAT[S, sleep(x1) ↔ sleep(x2)] iff either s1 sleeps and s2 sleeps or s1 doesn’t sleep and s2 doesn’t sleep.
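A sketch, in the same informal spirit, of how satisfaction recurses through the connectives. Formulae are nested tuples, e.g. ('and', ('sleep', 1), ('sleep', 2)) for ‘sleep(x1) & sleep(x2)’; the representation and the extension of ‘sleep’ are invented:

SLEEPERS = {'the Queen', 'Bill'}

def sat(S, f):
    """Satisfaction of an open sentence built from 'sleep' by the connectives."""
    op = f[0]
    if op == 'sleep':                                  # atomic: sleep(x_i)
        return S[f[1] - 1] in SLEEPERS
    if op == 'not':
        return not sat(S, f[1])
    if op == 'and':
        return sat(S, f[1]) and sat(S, f[2])
    if op == 'or':
        return sat(S, f[1]) or sat(S, f[2])
    if op == 'imp':
        return (not sat(S, f[1])) or sat(S, f[2])
    if op == 'iff':
        return sat(S, f[1]) == sat(S, f[2])
    raise ValueError(op)

S = ('the Queen', 7, 'UEA')
print(sat(S, ('and', ('sleep', 1), ('sleep', 2))))   # False: 7 does not sleep
print(sat(S, ('imp', ('sleep', 2), ('sleep', 1))))   # True: the antecedent fails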

 

Consider the following:

 

(i) SAT[S, sleep(x1) & ¬sleep(x1)] iff s1 sleeps and s1 doesn’t sleep.

(ii) SAT[S, sleep(x1) v ¬sleep(x1)] iff either s1 sleeps or s1 doesn’t sleep.

 

(i) says that a given sequence satisfies ‘sleep(x1) & ¬sleep(x1)’ just in case its first member both sleeps and doesn’t sleep. But there is no such sequence, for there is no object - a candidate first member - which both sleeps and doesn’t sleep. Thus, the formula is satisfied by no sequence. This is like writing all Fs in the main column of a truth table.

 

(ii) says that a given sequence satisfies ‘sleep(x1) v ¬sleep(x1)’ just in case its first member either sleeps or doesn’t sleep. But every sequence has a first member that either sleeps or doesn’t, for every object either sleeps or doesn’t. Thus the formula is satisfied by every sequence. This is like writing all Ts in the main column of a truth table.
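A quick, informal check of these two cases: whatever object occupies the first place, the conjunction is not satisfied and the disjunction is. The three-object ‘domain’ is, of course, just a finite stand-in for the infinite model of the text, and the extension of ‘sleep’ is invented.

SLEEPERS = {'the Queen'}                         # invented extension of 'sleep'
DOMAIN = ['the Queen', 7, 'UEA']                 # invented finite stand-in for the model

def sleeps(o):
    return o in SLEEPERS

contradiction = [sleeps(o) and not sleeps(o) for o in DOMAIN]   # sleep(x1) & ¬sleep(x1)
tautology     = [sleeps(o) or not sleeps(o) for o in DOMAIN]    # sleep(x1) v ¬sleep(x1)

print(any(contradiction))   # False: no candidate first member satisfies it
print(all(tautology))       # True: every candidate first member satisfies it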

 

(3) Sequences rather than sets

It might be wondered whether infinite sequences are required; could we not deal just with objects and sets thereof? The answer is ‘No’.

 

Sequences differ from sets in that the former are ordered:

 

{1, 2, 3} = {3, 2, 1}

<1, 2, 3> ≠ <3, 2, 1>

 

Take two formulae:

 

(i) x1 > x2

(ii) x1 < x2

 

Considered individually, each formula could be assigned a value from a direct assignment of values from the model to each component variable. For example:

 

(iii) 7 > 6

(iv) 4 < 5

 

In other words, trivially, there is some assignment under which (i) is true/satisfied and another assignment drawn from the same set under which (ii) is true/satisfied.  But we are after a recursive assignment of values such that the value of each complex formula is a function of the values of its components. Consider, then, the conjunction:

 

(v) x1 > x2 & x1 < x2

 

The assignments in (iii) and (iv) don’t produce the right value here, for the variables have to be assigned values uniformly, i.e., x1 and x2 have to take the same values across the conjunction. Thus, (v) is satisfied by no sequence, for there is no sequence whose first member is both greater than and less than its second member.
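The point about uniformity can be put informally in Python: each conjunct is satisfiable on its own, but the conjunction must be evaluated against a single sequence, and no single sequence does both jobs. (The pairs below are invented finite stand-ins for sequences.)

sequences = [(7, 6), (4, 5), (3, 3)]             # invented stand-ins for sequences

first_alone  = any(s[0] > s[1] for s in sequences)                  # x1 > x2: satisfiable, e.g. (7, 6)
second_alone = any(s[0] < s[1] for s in sequences)                  # x1 < x2: satisfiable, e.g. (4, 5)
conjunction  = any(s[0] > s[1] and s[0] < s[1] for s in sequences)  # x1 > x2 & x1 < x2

print(first_alone, second_alone)   # True True
print(conjunction)                 # False: no single sequence satisfies both conjuncts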

 

This is precisely what we want, for if a conjunction is satisfied by a sequence, then its conjuncts must be satisfied by the same sequence, and vice versa. If we employed direct assignments to individual formulae, then we would need a distinct assignment for a complex in which the formulae occur, and so we would forgo a recursive assignment.

 

The satisfaction conditions of each component of a formula enter into the satisfaction conditions of the formula itself, and the satisfaction conditions of each formula are wholly determined by the satisfaction conditions of its components.

 

(4) Eliminating names

Our language contains names. It would, however, streamline our procedure if we could eliminate names from our symbolism, to leave just predicates, variables, constants and quantifiers. Fortunately, this is easy to achieve.

 

For each formula of the type ‘Fa’, we have the equivalent formula:

 

(i) (∃x)(x = a & Fx)

 

Thus:

 

(ii) Bill sleeps iff there is an x such that x is Bill and x sleeps

                        iff something is Bill and that thing sleeps.

 

We still have the name ‘a’ here, but we can think of it as part of the predicate ‘= a’ which is uniquely true of a. Thus, (i) has the canonical form

 

(iii) (∃x)(Ix & Fx), where ‘I’ abbreviates the predicate ‘= a’.
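The translation itself is mechanical; here is a toy sketch in Python, using the same invented tuple representation as before:

def eliminate_name(pred, name, var='x'):
    """F(a)  ==>  (∃x)(x = a & F(x))"""
    return ('exists', var, ('and', ('=', var, name), (pred, var)))

print(eliminate_name('sleeps', 'Bill'))
# ('exists', 'x', ('and', ('=', 'x', 'Bill'), ('sleeps', 'x')))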

 

(5) Satisfaction and quantifiers

Binding variables by quantifiers has the effect of eliminating their indexes.

 

(i) SAT[S, (∃x)φxi] iff (∃S* ≈i S)(SAT[S*, φxi])

(In English: Sequence S satisfies ‘(∃x)φxi’ iff some sequence S* which differs from S in at most its ith place satisfies the open sentence ‘φxi’.)

 

Let us understand this general definition in terms of our simple language.

 

(ii) SAT[S, (∃x)sleeps(x1)] iff (∃S* ≈1 S)(SAT[S*, sleeps(x1)]).

 

This says that a given sequence S satisfies ‘(∃x)sleeps(x1)’ just so long as some sequence S* which differs from S in at most its 1st place satisfies ‘sleeps(x1)’. Well, what condition does this place on the sequences which satisfy ‘(∃x)sleeps(x1)’? Merely this: any sequence will satisfy the formula so long as some sequence satisfies the open sentence.

 

To see this, consider the following sequences:

 

S1: <the Queen, 7, UEA,…>

S2: < √2, 7, UEA,…> 

S3: <the Grand Canyon, 7, UEA…>

 

SAT[S1, (∃x)sleeps(x1)] because S1 ≈1 S1 and S1 satisfies the open sentence ‘sleeps(x1)’.

SAT[S2, (∃x)sleeps(x1)] because S1 ≈1 S2 and S1 satisfies the open sentence ‘sleeps(x1)’.

SAT[S3, (∃x)sleeps(x1)] because S1 ≈1 S3 and S1 satisfies the open sentence ‘sleeps(x1)’.

 

Does the sequence

 

S4: <7, 7, 7,…>

 

satisfy the formula ‘(∃x)sleeps(x1)’? Yes, because S5 differs from S4 in at most its first place and S5 satisfies the open sentence:

 

S5: <the Queen, 7, 7,…>.

 

Does this get the semantics right?

 

Yes, for ‘(∃x)sleeps(x1)’ means ‘At least one thing sleeps’, and our definition states that it is satisfied by every sequence on condition that the open sentence ‘sleeps(x1)’ is satisfied by some - at least one - sequence, i.e., by a sequence whose first member sleeps. If there is such a sequence, then there is at least one object which sleeps.
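Informally, the existential clause can be sketched over a finite domain (a crude stand-in for the infinite model; domain and extension are invented): S satisfies ‘(∃x)sleeps(x1)’ iff some variant of S differing in at most its 1st place satisfies the open sentence.

SLEEPERS = {'the Queen'}
DOMAIN = ['the Queen', 7, 'UEA', 3.14159]        # invented finite stand-in for the model

def variants(S, i):
    """All sequences differing from S in at most the i-th place (1-indexed)."""
    for obj in DOMAIN:
        yield S[:i-1] + (obj,) + S[i:]

def sat_sleeps_x1(S):
    return S[0] in SLEEPERS

def sat_exists_sleeps(S):                        # '(∃x)sleeps(x1)'
    return any(sat_sleeps_x1(S_star) for S_star in variants(S, 1))

print(sat_exists_sleeps((7, 7, 7, 7)))           # True: the variant beginning with the Queen works
print(sat_exists_sleeps(('UEA', 7, 7, 7)))       # True: every sequence satisfies it, since something sleeps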

 

The satisfaction conditions for the universal quantifier follow the same course:

 

(iii) SAT[S, (∀x)φxi] iff (∀S* ≈i S)(SAT[S*, φxi])

(In English: Sequence S satisfies ‘(∀x)φxi’ iff every sequence S* which differs from S in at most its ith place satisfies the open sentence ‘φxi’.)

 

Again, a simple instance is:

 

(iv) SAT[S, (∀x)sleeps(x1)] iff (∀S* ≈1 S)(SAT[S*, sleeps(x1)]).

 

This says that a given sequence S satisfies ‘(∀x)sleeps(x1)’ just so long as every sequence S* which differs from S in at most its 1st place satisfies ‘sleeps(x1)’. What condition does this place on the sequences which satisfy ‘(∀x)sleeps(x1)’? Simply this: any sequence will satisfy the formula so long as every sequence satisfies the open sentence.

 

Of course, this is just what we want. ‘(∀x)sleeps(x1)’ says that everything sleeps. If it were true, every sequence would satisfy the open sentence ‘sleeps(x1)’, for every object is the first member of some sequence, and so, if every sequence satisfies the open formula, every object is a sleeper. But the sentence is false; not everything sleeps. There is, therefore, some sequence S* which differs from a given sequence S in at most its 1st place which doesn’t satisfy ‘sleeps(x1)’. For example, consider the two sequences

 

S6: <7, 7, 7,…>

S7: <1, 7, 7,…>

 

S7 differs from S6 in at most its 1st place, but S7 doesn’t satisfy ‘sleeps(x1)’; therefore, S6 doesn’t satisfy ‘(∀x)sleeps(x1)’.
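The same informal sketch, adapted to the universal clause: S satisfies ‘(∀x)sleeps(x1)’ iff every variant of S differing in at most its 1st place satisfies ‘sleeps(x1)’. With non-sleepers in the (invented) domain, no sequence does.

SLEEPERS = {'the Queen'}
DOMAIN = ['the Queen', 1, 7]                     # invented finite stand-in for the model

def variants(S, i):
    for obj in DOMAIN:
        yield S[:i-1] + (obj,) + S[i:]

def sat_forall_sleeps(S):                        # '(∀x)sleeps(x1)'
    return all(S_star[0] in SLEEPERS for S_star in variants(S, 1))

S6 = (7, 7, 7)
print(sat_forall_sleeps(S6))                     # False: the variant (1, 7, 7) fails the open sentence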

 

Definition of truth for the predicate calculus: If A is a sentence, then A is true iff A is satisfied by every sequence, and false otherwise.

 

Note 1: Truth drops out as a special case of satisfaction: the case where a sentence is satisfied by all sequences. The genius of Tarski was to see that if we are to specify truth conditions recursively for all wffs of the language, then we need a uniform semantic relation which applies to predicates, open sentences and sentences alike. Again, we can’t directly employ truth because not all wffs are truth evaluable.

 

Note 2: We have to specify the condition ‘if A is a sentence’, for some open sentences, which are neither true nor false, are also satisfied by every sequence (see the example above).

 

Note 3: Once a variable is bound, its numerical index becomes irrelevant: which object occupies the corresponding position of a sequence makes no difference to whether the sequence satisfies the containing formula. Only free variables differentiate between the sequences which satisfy and those which don’t satisfy the containing formula.
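On a toy finite model, the definition of truth can be checked by brute force: enumerate ‘every sequence’ (here, every length-2 tuple over a tiny invented domain) and ask whether all of them satisfy the sentence. This is only a crude illustration of the definition, not a substitute for it.

from itertools import product

SLEEPERS = {'the Queen'}
DOMAIN = ['the Queen', 7]

def variants(S, i):
    for obj in DOMAIN:
        yield S[:i-1] + (obj,) + S[i:]

def sat_exists_sleeps(S):                        # '(∃x)sleeps(x1)'
    return any(v[0] in SLEEPERS for v in variants(S, 1))

def sat_forall_sleeps(S):                        # '(∀x)sleeps(x1)'
    return all(v[0] in SLEEPERS for v in variants(S, 1))

all_sequences = list(product(DOMAIN, repeat=2))

print(all(sat_exists_sleeps(S) for S in all_sequences))   # True: the sentence is true
print(all(sat_forall_sleeps(S) for S in all_sequences))   # False: the sentence is false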

 

 

Miscellany

The satisfaction conditions of complex quantified formulae follow trivially from the above, with the recursive interpretation of the constants defined over satisfaction rather than truth value:

 

(i) SAT[S, (∃x)(Fxi & Gxi)] iff (∃S* ≈i S)(SAT[S*, Fxi] & SAT[S*, Gxi]).

(ii) SAT[S, (∀x)¬(Fxi → Gxi)] iff (∀S* ≈i S)(SAT[S*, Fxi] & ¬SAT[S*, Gxi]).

 

The satisfaction conditions of nested quantified formulae follow trivially from the above:

 

(iii) SAT[S, (∀xi)(∃xj)(Fxixj)] iff (∀S* ≈i S)(∃S** ≈j S*)(SAT[S**, Fxixj]).
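Informally, clause (iii) chains the variants: every i-variant S* of S must have some j-variant S** satisfying the open sentence. A toy sketch on a small numerical domain, reading ‘F(x1, x2)’ as ‘x1 ≤ x2’ (domain, indices and predicate are all invented for illustration):

DOMAIN = [1, 2, 3]

def variants(S, i):
    for obj in DOMAIN:
        yield S[:i-1] + (obj,) + S[i:]

def sat_F(S):                                    # open sentence F(x1, x2), read as x1 <= x2
    return S[0] <= S[1]

def sat_forall_exists(S):                        # '(∀x1)(∃x2)F(x1, x2)'
    return all(any(sat_F(S2) for S2 in variants(S1, 2))
               for S1 in variants(S, 1))

print(sat_forall_exists((3, 1)))                 # True: each first member has a large-enough partner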

 

The satisfaction conditions of quantified formulae with free variables follow trivially from the above:

 

(iv) SAT[S, (∃xj)(Fxixj)] iff (∃S* ≈j S)(SAT[S*, Fxixj])

 

The formula here is neither true nor false: it is satisfied by all and only those sequences whose ith member Fs something or other.

 

Validity by Way of Satisfaction

Our account of truth via satisfaction immediately allows us to characterise validity in terms of satisfaction.

 

Validity: A1, A2,… An ⊨ B iff (∀S)((SAT[S, A1] & SAT[S, A2] & … & SAT[S, An]) → SAT[S, B])

(In English: B follows from A1, A2,… An iff every sequence which satisfies all of A1, A2,… An also satisfies B, i.e., it is not possible for all of A1, A2,… An to be satisfied by a sequence S and B not to be.)

 

Or:

 

Validity: A1, A2,… An ⊨ B iff (∀S)¬(SAT[S, A1] & … & SAT[S, An] & SAT[S, ¬B]), i.e., no sequence satisfies every member of {A1, A2,… An} ∪ {¬B}.

 

 

To see this in operation, let’s turn to our Tony Blair argument, with the name ‘Tony Blair’ eliminated, replaced by the predicate ‘I’:

 

(A): (∀x)(Fx → Gx)

        (∃x)(Ix & Fx)

     ∴ (∃x)(Ix & Gx)

 

Assume, then, that ‘(∃x)(Ix & Gx)’ is satisfied by no sequence, and let ‘(∀x)(Fx → Gx)’ and ‘(∃x)(Ix & Fx)’ be satisfied by S.

 

(i) If SAT[S, (∃x)(Ix & Fx)], then, for some sequence S* which differs from S in at most its ith place, SAT[S*, Ixi] and SAT[S*, Fxi].

 

(ii) If SAT[S, (∀x)(Fx → Gx)], then SAT[S*, (∀x)(Fx → Gx)], for a closed sentence satisfied by one sequence is satisfied by all.

 

(iii) If SAT[S*, (∀x)(Fx → Gx)] and, from (i), SAT[S*, Fxi], then SAT[S*, Gxi].

 

(iv) Therefore, SAT[S*, Gxi].

 

(v) From (i), SAT[S*, Ixi].

 

(vi) Therefore, SAT[S*, Ixi & Gxi]

 

(vii) So, from (i), there is a sequence which differs from S in at most its ith place and which satisfies ‘Ixi & Gxi’, viz., S*.

 

(viii) By definition, SAT[S, (∃x)(Ix & Gx)] iff there is a sequence S* which differs from S in at most its ith place which satisfies ‘Ixi & Gxi’.

 

(ix) Therefore, SAT[S, (∃x)(Ix & Gx)], which contradicts our assumption that ‘(∃x)(Ix & Gx)’ is satisfied by no sequence.

 

(x) Therefore, (∀x)(Fx → Gx), (∃x)(Ix & Fx) ⊨ (∃x)(Ix & Gx). QED.
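The argument just shown to be valid can also be checked, crudely, on a toy finite model: search for a sequence satisfying both premises together with the negated conclusion. Finding one would refute validity; finding none here is merely suggestive, since validity quantifies over all models and all sequences. The extensions of F (‘is French’), G (‘is green’) and I (‘is Tony Blair’) are invented.

from itertools import product

DOMAIN = ['Tony Blair', 'Marie', 7]
F = {'Tony Blair', 'Marie'}                      # invented: the Frenchmen
G = {'Tony Blair', 'Marie'}                      # invented: the green things
I = {'Tony Blair'}                               # the '= Tony Blair' predicate

def variants(S, i):
    for obj in DOMAIN:
        yield S[:i-1] + (obj,) + S[i:]

def premise1(S):                                 # (∀x)(Fx → Gx)
    return all((v[0] not in F) or (v[0] in G) for v in variants(S, 1))

def premise2(S):                                 # (∃x)(Ix & Fx)
    return any((v[0] in I) and (v[0] in F) for v in variants(S, 1))

def neg_conclusion(S):                           # ¬(∃x)(Ix & Gx)
    return not any((v[0] in I) and (v[0] in G) for v in variants(S, 1))

counterexamples = [S for S in product(DOMAIN, repeat=2)
                   if premise1(S) and premise2(S) and neg_conclusion(S)]
print(counterexamples)                           # []: no sequence satisfies premises plus negated conclusion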

 

In terms of proof theory (here we assume names for ease of exposition):

 

(D1)

1        (1)  (∀x)(Fx → Gx)       A

2        (2)  Fa                  A

1        (3)  Fa → Ga             1 UE

1,2      (4)  Ga                  2,3 MPP

 

 

 

 

Explanation

A proof consists of lines. Each line consists of a numbered wff, the numbers of the assumptions on which it depends, and the rule by which we arrived at the wff. So, the first line looks like this:

 

1                            (1)              (∀x)(Fx → Gx)       A

(numbers of the assumptions) (line number)    (wff)               (rule)

 

 

Rules

A: Insert any wff at any line. The wff depends on itself.

 

UE (Universal Elimination): If there is a universally quantified formula at some line, then at any lower line a formula can be inserted which is formed by eliminating the quantifier and uniformly substituting a name for the variable it bound. The derived wff depends on the assumptions on which the universal formula depends.

 

MPP (Modus Ponens): If there is ‘φ’ at a line, and ‘φ → ψ’ at another line, then ψ can be inserted at a lower line. ψ depends on the assumptions upon which ‘φ’ and ‘φ → ψ’ depend.

 

Each rule preserves truth - if the lines to which the rule applies are true, then the derived line is true.
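As a final informal illustration, the two rules can be mimicked on the toy tuple representation used earlier (this only shows the shape of the rules; it is nothing like a full proof checker):

def UE(universal, name):
    """From ('forall', v, body), derive body with the name substituted for v."""
    _, v, body = universal
    def subst(f):
        if f == v:
            return name
        if isinstance(f, tuple):
            return tuple(subst(part) for part in f)
        return f
    return subst(body)

def MPP(phi, implication):
    """From phi and ('imp', phi, psi), derive psi."""
    op, antecedent, consequent = implication
    assert op == 'imp' and antecedent == phi
    return consequent

premise1 = ('forall', 'x', ('imp', ('F', 'x'), ('G', 'x')))   # (∀x)(Fx → Gx)
premise2 = ('F', 'a')                                         # Fa
line3 = UE(premise1, 'a')                                     # ('imp', ('F', 'a'), ('G', 'a'))
line4 = MPP(premise2, line3)                                  # ('G', 'a')
print(line3, line4)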

 

Coda

Logic is not meant to be easy, but it is infinitely distant from technical trivialities. Tarski’s work, which, in essence, is reviewed above, has had a tremendous influence on many disciplines: numerous areas of mathematics, linguistics, AI, computer science, scientific modeling in all disciplines, etc. In philosophy, David Kaplan has quipped that 47% of philosophy is Tarskian.