Principia Mathematica propositional logic
The purpose of this page is to prove the statements of Interface:Principia Mathematica propositional logic theorems from the Principia Mathematica axioms for propositional logic.
We define some variables for well-formed formulas:
We shall now begin to derive the statements. Whitehead and Russell use a decimal numbering system of the form ∗n, where n is a rational number with a small number of digits after the decimal point. Unless we give theorems our own name, we shall adopt their system for easier reference. Where we do use our own names, we sometimes give the decimal reference in a JHilbert comment.
- 1 Disjunction and implication
- 2 Conjunction
- 3 Biconditional
- 3.1 Negation
- 3.2 Biconditionalized transposition laws
- 3.3 Double negation
- 3.4 Some more deduction tools
- 3.5 Reflexive, symmetric, and transitive
- 3.6 Additional biconditional theorems
- 3.7 More theorems stated using the biconditional
- 3.8 Algebraic laws for disjunction and conjunction
- 3.9 Substitution and builders
- 3.10 Unidirectional builders
- 3.11 Distributive laws
- 3.12 De Morgan's laws
- 3.13 Biconditional and conjunction
- 3.14 Biconditionalized composition
- 3.15 Weakening of biconditional to disjunction
- 4 Case Elimination
- 5 Modus ponens and modus tollens
- 6 Tautology and contradiction
- 7 Relationships between connectives
- 8 Implication distribution over biconditional
- 9 References
Disjunction and implication
Likewise it will be convenient to have the Sum axiom as a rule:
Since Principia Mathematica defines implication, p → q, as ¬p ∨ q, the Add axiom yields the introduction of an antecedent (Whitehead and Russell call it "Simplification").
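As a semantic aside (not part of the formal JHilbert derivation), Simplification can be confirmed by truth table; the helper names imp and taut below are our own invention:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def taut(f, n):
    # f is a tautology iff it holds under every assignment to n variables
    return all(f(*v) for v in product([False, True], repeat=n))

# Simplification: q -> (p -> q)
assert taut(lambda p, q: imp(q, imp(p, q)), 2)
```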
It will be convenient to have this theorem as helper rule:
Again due to the way implication is defined, Perm gives us our first transposition rule:
Next, we prove a precursor to the commutative law of conjunction, which will be very convenient until we get Peano's importation and exportation principles. We call this theorem Comm in accordance with Whitehead and Russell.
This theorem enables us to prove two forms of the syllogism from the Sum axiom. The rule applySyllogism expresses the syllogistic nature of these theorems (we have two implications and derive a third). But they can also be used to build up more complicated formulas, a pattern which is expressed by the addCommonConsequent rules, in which we have just one implication and derive a more involved formula.
The syllogism yields the "identity" Id, p → p (which is, by our definition, just ¬p ∨ p), and, by Perm, tertium non datur, p ∨ ¬p:
Actually, Whitehead and Russell use the permutation of our TertiumNonDatur as theirs.
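Both Id and tertium non datur are easy to confirm semantically; this sketch (our own, with invented helper names) simply runs through both truth values:

```python
def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

# Id: p -> p, i.e. (not p) or p, and tertium non datur: p or (not p)
assert all(imp(p, p) for p in (False, True))
assert all((p or (not p)) for p in (False, True))
```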
With the TertiumNonDatur, we can tackle double negation:
Combined with double negation, the transposition law we already have yields the remaining three:
We now have . We transpose the consequent using our transposition law
to get . All that remains to do now is to eliminate the double negation.
The proofs of the next two theorems work similarly but are actually easier due to a more favourable distribution of negations:
Next, we prove the disjunction introduction laws. Introduction from the left is just the Add axiom of Principia, and a permutation yields the introduction from the right.
Modus ponens law
Next, we prove a version of the modus ponens law with Assoc:
We now "repair" the extra twist in the Assoc axiom to get the actual associativity rules for disjunction:
Expression building with disjunctions
Now we prove three helpful theorems regarding expression building with disjunctions, that is, they are companions to the Sum axiom. All these proofs proceed in two steps, using a syllogism to alter the consequent of Sum.
As first step, we simply take the Sum axiom,
and prove as second step,
so that the result follows via a syllogism:
As first step, we take the Sum axiom with transposed consequent, that is ,
As second step, we prove
Combining these two steps, the result follows by transposing the consequent back.
DisjunctionSummationRR follows directly from
DisjunctionSummationRL in the same way.
The rule forms of all three are:
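The exact variants are given in the formal text; as an illustration only, the following truth-table sketch (with our own helper names) checks a representative pair of summation laws, in which an implication survives disjoining a fixed term on either side:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def taut(f, n):
    # f is a tautology iff it holds under every assignment to n variables
    return all(f(*v) for v in product([False, True], repeat=n))

# From p -> q we may pass to (p or r) -> (q or r) and to (r or p) -> (r or q)
assert taut(lambda p, q, r: imp(imp(p, q), imp(p or r, q or r)), 3)
assert taut(lambda p, q, r: imp(imp(p, q), imp(r or p, r or q)), 3)
```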
Implication distribution theorem (if part)
Our next goal will be to prove the if part of the implication distribution theorem, . Given that implication is defined from disjunction and negation in Principia, it would seem more natural to use De Morgan's law for disjunction negation for that. However, we don't have that yet, and in fact, implication distribution is instrumental to prove the "principle of the factor" Fact below, which in turn is required for De Morgan's law. So what we shall do instead is to install equivalences such as , and in more or less deeply nested subexpressions in a long chain of theorems until we finally arrive at the desired result.
That is, we make from .
This gives us already the result, except that the last has been duplicated, . We remove the duplicate using the Taut axiom.
This is the same as except for a permutation in the second antecedent. Since this permutation is nested four levels deep, some work is required. We begin with ,
and prove the permutation :
We can now install this permutation as an antecedent through a transposition. Then the theorem follows through a syllogism.
This is , with replaced with , plus some of the usual transformations.
This gives us , which we can connect with ,
so we get . The rest of the proof is now to effect the disjunctive associativity law in the second antecedent.
Theorem is now precisely the if part of the implicational distribution law.
Syllogism in the consequent
We prove a "syllogism in the consequent law", that is, , which is useful for syllogisms depending on a common hypothesis. We first show a modus ponens law for disjunctions.
This gives us . The theorem follows now by a summation:
Next, we prove a three terms summation law:
This gives us , so we only need to distribute the over :
Combining and , we get
from which we can immediately derive our new syllogism law:
Implication distribution theorem (only-if part)
Next we prove the only-if part of the implicational distribution law (the if part of which we proved in ). We start with a simple consequence of and the converse of :
The meat of our desired result is ( ). It differs only by being partly expressed in terms of disjunction instead of implication. Each of the following theorems is just an intermediate step in proving :
From this follows the only if part of the implicational distribution law:
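Semantically, the full implicational distribution law is the tautology (p → (q → r)) ↔ ((p → q) → (p → r)), which the following sketch (our own helpers imp, iff, taut) confirms by truth table:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    # p <-> q as a conjunction of two implications
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Implicational distribution, both directions at once:
assert taut(lambda p, q, r:
            iff(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r))), 3)
```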
In this section we prove statements involving conjunction, p ∧ q. Recall that conjunction is defined by ¬(¬p ∨ ¬q). Our first theorem is the combination of two statements into a conjunction.
The trick is now to write as :
We also provide a commuted version:
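As a quick semantic check (our own sketch, not part of the development), the Principia definition of conjunction coincides with the usual truth function for "and":

```python
def conj(a, b):
    # Principia definition: p and q abbreviates not((not p) or (not q))
    return not ((not a) or (not b))

# conj agrees with Python's built-in conjunction on all four cases
assert all(conj(a, b) == (a and b)
           for a in (False, True) for b in (False, True))
```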
We can prove the commutative law for conjunction from Perm using transpositions:
Next, we prove the negation of :
Next, we prove the conjunction elimination theorems.
This gives us . All we need now is a double negation of the left bracket.
Import and export
We now prove Peano's import and export principles. We begin with exportation:
To shift the bracket right, we combine transposition and the Comm axiom to get :
What remains to do now is to install a transposition of with :
Importation is simpler as the conjunction is in the consequent this time:
We can use importation to prove syllogisms in conjunction form:
Import and export also give us another transposition theorem, (which we prove in several steps):
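Import and export together assert the equivalence of p → (q → r) and (p ∧ q) → r, which can be spot-checked by truth table (our own illustrative helpers below):

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    # p <-> q as a conjunction of two implications
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Import/export: p -> (q -> r) is equivalent to (p and q) -> r
assert taut(lambda p, q, r: iff(imp(p, imp(q, r)), imp(p and q, r)), 3)
```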
Next, we prove Comp, the principle of composition, (p → q) ∧ (p → r) → (p → (q ∧ r)).
Conjunction composition has an analogue for disjunction ( ), which we prove in several steps:
Principle of the factor
Finally, we prove Peano's principle of the factor, called Fact by Whitehead and Russell. It complements the Sum axiom and its companion theorems. Two consequences are partial builder theorems for conjunction and disjunction.
We prove this theorem in two steps. First we use the left hand side of the antecedent to deduce :
Now we do the same with the right hand side of the antecedent to get . The theorem then follows from a syllogism in the consequent.
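Semantically, Fact states that an implication survives conjoining a fixed factor; the sketch below (our own helpers, not part of the formal development) confirms both the right-factor and left-factor forms by truth table:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Fact: (p -> q) -> (p and r -> q and r), and the mirrored form
assert taut(lambda p, q, r: imp(imp(p, q), imp(p and r, q and r)), 3)
assert taut(lambda p, q, r: imp(imp(p, q), imp(r and p, r and q)), 3)
```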
This theorem follows exactly as ConjunctionMultiplication, except that we use the Sum-type theorem DisjunctionSummationRL instead of Fact.
In Principia, the biconditional p ↔ q is defined simply as (p → q) ∧ (q → p), so our combination and elimination theorems for conjunction immediately yield the corresponding introduction and elimination rules for the biconditional.
In order to prove the negation function theorem , we combine the two transposition laws and :
This gives us , the right hand side of which must be permuted:
Now the same again for :
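Given the definition of the biconditional as a conjunction of two implications, a negation function theorem of the form (p ↔ q) → (¬p ↔ ¬q) is easily spot-checked (our own illustrative helpers):

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    # Principia definition: p <-> q abbreviates (p -> q) and (q -> p)
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Negation function: (p <-> q) -> (not p <-> not q)
assert taut(lambda p, q: imp(iff(p, q), iff(not p, not q)), 2)
```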
Biconditionalized transposition laws
We proved various transposition laws for the conditional earlier. Now we provide some for the biconditional, starting with . The proof is a straightforward application of the conditional transposition laws, but has to prove each direction of the biconditionals.
Some more deduction tools
The next two theorems, and , enable making some deductions involving biconditionals and conjunctions. They are similar to but extend it.
The idea behind the proof of the converse of below is quite simple: becomes its own converse (modulo some double negation) when substituted with some negated terms. The rest of the proof is just getting rid of the double negation.
The next theorem, , is similar. Until we have built up more of the biconditional machinery, it will be easier to prove each implication separately. The proof is a straightforward substitution together with a commutation of the initial . As with the previous proof, most of the length of the proof consists of building up formulas to handle things like removing deeply nested double negation, a process which will get (somewhat) easier later.
Reflexive, symmetric, and transitive
The biconditional has these three properties (which correspond to those defining an equivalence relation).
The proof of biconditional transitivity is somewhat more complicated. The antecedent contains essentially four disjunction factors. Each of them has to be picked out and applied:
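All three equivalence-relation properties can be confirmed by truth table; in this sketch (our own helpers) the biconditional is modeled, as in Principia, by a conjunction of two implications:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    # Principia definition: p <-> q abbreviates (p -> q) and (q -> p)
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

assert taut(lambda p: iff(p, p), 1)                                   # reflexive
assert taut(lambda p, q: imp(iff(p, q), iff(q, p)), 2)                # symmetric
assert taut(lambda p, q, r:
            imp(iff(p, q) and iff(q, r), iff(p, r)), 3)               # transitive
```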
Additional biconditional theorems
Another easy builder theorem:
More theorems stated using the biconditional
Some more theorems where we have proved implications in both directions, but can now express them using the biconditional:
Algebraic laws for disjunction and conjunction
Some of the theorems of the propositional calculus can be thought of as analogous to those of other algebras, showing properties such as commutativity and associativity. Although Whitehead and Russell think this concept was overemphasized in their day, they do provide theorems which represent algebraic properties.
Idempotence for disjunction and conjunction are perhaps the most interesting, as they cause the biggest differences between this algebra and many other algebras:
To prove conjunction idempotence, we first catch up on a few basic implication theorems we haven't needed until now:
That gives us and we just need to eliminate the extra :
Idempotence is also expressed in the following rules.
We already have commutativity of disjunction and conjunction, and just need to express them using the biconditional:
Both disjunction and conjunction are associative:
The link between (which has some implications and negations) and the conjunctions in ConjunctionAssociativity may not be apparent, but follows from the definitions of conjunction and implication.
We already provided rules for associating disjunctions; here are the corresponding ones for conjunctions:
Substitution and builders
, then we want to be able to substitute
in a theorem to get a new theorem. The mechanism which we are working towards, in Interface:Classical propositional calculus, is provided by
buildBiconditional. Those rules do not eliminate the need for a proof to build up the expressions embodying the substitution, but they reduce the process of constructing such a proof to a familiar (if perhaps tedious) pattern. We already proved
removeNegation, and we're now ready to prove the rest.
The proof proceeds by expanding into four implications, rearranging them using associativity and commutativity, and applying ConjunctionMultiplication to each half.
First, the rearrangement we need is
ConjunctionFunction-1, known as
an4 in metamath.
This is just like the theorem for conjunction, except that we build on the partial builder theorem DisjunctionSummation instead of ConjunctionMultiplication:
The builder for implication is a simple consequence of the builder for disjunction together with the equivalence of and .
To prove the biconditional builder, we need . As this is an equivalence of conjunctions, we'll get it with the conjunction builder. The equivalences needed to apply the conjunction builder will come from the implication builder (and conjunction commutativity in one of the two directions).
The builders which we just proved start with biconditionals. If we only have implications, there is a similar set of builders (which, of course, only provide implications, not biconditionals, in the consequent). Here we summarize the ones we have already proved, and prove a few more.
, is a unidirectional negation builder.
The disjunction summation theorem, , is the general form of the unidirectional disjunction builder.
We also provide convenience theorems for cases in which one of the implications is simply
, and where there might be a commutation in one of the disjunctions. All of these are already proved (
disjoinLR, and so on), so the only thing we need to do here is provide
DisjunctionSummationLL as a new name for the Sum axiom.
The conjunction multiplication theorem, , is the general form of the unidirectional conjunction builder.
Here we add convenience theorems for cases in which one of the implications is simply , and where there might be a commutation in one of the conjunctions.
ConjunctionMultiplicationRR is just a new name for the Fact theorem; the rest could either be proved from Fact and commutativity, or as special cases of
The first two unidirectional implication builders came early on,
The general unidirectional implication builder would be .
There is no unidirectional builder for the biconditional.
We prove two distributive laws. The first one, , is analogous to the distributive law in well-known algebras such as the real numbers (if one thinks of conjunction as being like multiplication and disjunction as being like addition).
The second distributive law, , has no analogue in ordinary algebra.
We also supply commuted versions of both laws and some rules:
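Both distributive laws are easy to spot-check semantically (our own illustrative helpers, not part of the formal development):

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Conjunction distributes over disjunction, and vice versa
assert taut(lambda p, q, r: iff(p and (q or r), (p and q) or (p and r)), 3)
assert taut(lambda p, q, r: iff(p or (q and r), (p or q) and (p or r)), 3)
```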
De Morgan's laws
Since the definition of conjunction in Principia is based on De Morgan's laws, the laws themselves are not hard to prove.
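Both laws can be confirmed by truth table (our own illustrative sketch, with invented helper names):

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# De Morgan's laws
assert taut(lambda p, q: iff(not (p or q), (not p) and (not q)), 2)
assert taut(lambda p, q: iff(not (p and q), (not p) or (not q)), 2)
```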
Biconditional and conjunction
A true conjunct does not affect the truth of a proposition, or in symbols
q → (p ↔ p ∧ q).
Before we prove that statement itself, we prove two equivalences involving implications and conjunctions. We will be using them in the proof of the result stated above.
We start with the forward implication. We stick
p → p on the proof stack, and then start with
(p → p) ∧ (p → q) → (p → p ∧ q)
Exporting and detaching
p → p finishes the forward implication.
The reverse direction is even easier.
We first stick two things on the stack for later use.
Now we start with
(p ∧ q → p) → ((p → p ∧ q) → (p ↔ p ∧ q)), and then detach the antecedent (a theorem) to give
(p → p ∧ q) → (p ↔ p ∧ q)
The converse of that statement,
(p ↔ p ∧ q) → (p → p ∧ q), is even simpler. So combining the two, we get
(p → p ∧ q) ↔ (p ↔ p ∧ q)
The only thing left is to combine with
(p → q) ↔ (p → p ∧ q) which we left on the proof stack.
We earlier proved composition laws for disjunction and conjunction:
The converses, while less interesting, are also true, and we prove them now.
Weakening of biconditional to disjunction
We've already dealt with weakening the biconditional to an implication. This section just has the same theorems, or slight variations thereof, phrased in terms of disjunction instead of implication.
Case elimination

Proofs often show that one of several cases must apply, and then prove the desired proposition for each case. Here we provide one form of this, where there are two cases. Note that in principle, it is always possible to reduce the handling of multiple cases to repeated handling of two cases.
Modus ponens and modus tollens
Now that we have import, we can derive the version of the modus ponens law which Interface:Classical propositional calculus expects:
Modus tollens is just a combination of modus ponens and transposition.
Tautology and contradiction
Interface:Classical propositional calculus gives the names Tautology and Contradiction to (p ∨ (¬ p)) ↔ (⊤) and (p ∧ (¬ p)) ↔ (⊥), respectively. They are somewhat more subtle than they appear (and in particular are not just trivial consequences of our definitions of ⊤ and ⊥), because the variable which appears in the definition of ⊤ or ⊥ is not the same as the one in the Contradiction theorem. We therefore prove them as consequences of the notion that two true statements are equivalent, or that two false statements are equivalent.
Along the same lines is .
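Modeling ⊤ and ⊥ by the constants True and False, both statements can be spot-checked (our own illustrative helpers, not part of the formal development):

```python
def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    # p <-> q as a conjunction of two implications
    return imp(a, b) and imp(b, a)

# Tautology: (p or not p) <-> T;  Contradiction: (p and not p) <-> F
assert all(iff(p or (not p), True) for p in (False, True))
assert all(iff(p and (not p), False) for p in (False, True))
```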
Relationships between connectives
Here we express implication in terms of disjunction, biconditional in terms of implication, etc.
Biconditional and implications
These are all straightforward because we define the biconditional as a conjunction of two implications.
Biconditional as disjunction of two conjunctions
One way of looking at is "both p and q are true, or neither are true", or in symbols, . We prove this via a fairly long string of simpler propositions.
At this point we have . We'll come back to that, after we prove :
Now we have and on the proof stack. It is now enough to join those with a conjunction and apply the distributive law:
At this point we have , and we just need to apply De Morgan's law and introduce double negation to get .
We're much closer than it may appear (because JHilbert automatically applies definitions), but other than two applications of De Morgan's law, it is just the definitions of implication and biconditional: from the definition of implication yields
Now we have and hence by the definition of the biconditional. A final appeal to the definition of implication yields .
One way to see is as a relationship between the biconditional and the exclusive or. One way to express an exclusive or is , and seen this way, states that the biconditional is the negation of the exclusive or:
At this point we have (applying the definition of implication) and we need (applying the definition of conjunction) . So we just need to fix the double negation.
We now have , so we just need to fix the double negation and commute the second conjunction.
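Both readings can be confirmed by truth table. In the sketch below (our own helpers), the exclusive or is rendered as (p ∧ ¬q) ∨ (¬p ∧ q); this particular rendering is an assumption on our part, chosen as one common form:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Biconditional as a disjunction of two conjunctions
assert taut(lambda p, q:
            iff(iff(p, q), (p and q) or ((not p) and (not q))), 2)
# Biconditional as the negation of the exclusive or
assert taut(lambda p, q:
            iff(iff(p, q), not ((p and (not q)) or ((not p) and q))), 2)
```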
Biconditional as conjunction of two disjunctions
This one follows immediately from our definitions of biconditional and implication, and commutativity.
Implications and disjunctions
The relationship between implication and disjunction is just our definition of implication, or an easy consequence thereof.
Implication distribution over biconditional
Antecedent distribution says that we can distribute the antecedent in a formula of the form
p → (q → r). Here we prove a similar result for
p → (q ↔ r).
We first split (p → q) ↔ (p → r) into two implications:
Then we apply
AntecedentDistribution to each one,
and combine them.
The left hand side from buildConjunction was
((p → q) → (p → r)) ∧ ((p → r) → (p → q)), so we are ready to apply transitivity there.
The right hand side from buildConjunction was
(p → (q → r)) ∧ (p → (r → q)), which we first transform to
p → ((q → r) ∧ (r → q)),
and then to
p → (q ↔ r).
We now have our desired result except the two sides are interchanged.
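Semantically, the result is the tautology (p → (q ↔ r)) ↔ ((p → q) ↔ (p → r)), which the following sketch (our own helpers, not part of the formal development) confirms by truth table:

```python
from itertools import product

def imp(a, b):
    # Principia definition: p -> q abbreviates (not p) or q
    return (not a) or b

def iff(a, b):
    # p <-> q as a conjunction of two implications
    return imp(a, b) and imp(b, a)

def taut(f, n):
    return all(f(*v) for v in product([False, True], repeat=n))

# Implication distributes over the biconditional
assert taut(lambda p, q, r:
            iff(imp(p, iff(q, r)), iff(imp(p, q), imp(p, r))), 3)
```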
That's it! We're ready to export our theorems to Interface:Principia Mathematica propositional logic theorems. That interface also requires us to define the alias
We also export Interface:Law of the excluded middle, just to emphasize that the law of the excluded middle is a theorem of classical propositional logic.
- A. Whitehead, B. Russell, Principia Mathematica, Cambridge University Press, 1910.
- Whitehead and Russell, loc. cit., p. 120
- Whitehead and Russell, loc. cit., p. 121
- Whitehead and Russell, loc. cit., p. 120
- set.mm, metamath.org, accessed February 15, 2010
- Whitehead and Russell, loc. cit., p. 124