The Liar Paradox is simple enough to explain – is the following statement true or false?
This statement is false.
If it’s true then it’s false, but if it’s false then it’s true … nothing works.
In my not-so-humble opinion, most (maybe all) paradoxes are the last step in a proof by contradiction that some unstated assumption is false.
In this case, the assumption is that the above statement is meaningful – that it is either true or false. The assumption is false; the statement is meaningless. End of paradox.
Of course, there’s more to it than that. Behind the Liar Paradox is a more general, and seemingly sensible, assumption: that any statement that is syntactically correct is meaningful. Obviously not the case. Here’s another example:
I do not believe this statement.
If I believe it, then I don’t, and if I don’t, then I do.
It’s tempting to believe that self-reference is the problem, but there are plenty of self-referential sentences that are (or seem … ) meaningful and true; e.g. “I know this sentence is true”.
To get to the bottom of this we need to formalize the paradox. This was first done by the famous logician Alfred Tarski (in 1936). In his formalization, the problem is the phrase “is true”.
More than 80 years later you can explain it without getting too technical. Imagine we have a formal logical language with quantifiers, variables, Boolean connectives, arithmetic operations and (this really helps) strings and string operations. Call this language ℒ. At this stage everything syntactically correct makes sense. For example, we can state that string concatenation is associative, or that multiplication distributes over addition.
Since we have strings, we can talk about expressions and formulas in the language itself. We can define a predicate (of strings) that is true iff the string is a syntactically correct formula. We can define an operation “subs” that yields the result of substituting an expression for a variable; more precisely, subs(f,g) is the result of substituting g for every occurrence of x in f. So far, no problem. Can we produce a formula that refers to itself? Not yet.
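To make subs concrete, here is a minimal Python sketch, assuming formulas are just strings and the variable is literally the character x (a blind textual replace; a real implementation would work on parsed formulas):

def subs(f: str, g: str) -> str:
    # Toy version of subs: substitute g for every occurrence of the variable x in f.
    return f.replace("x", g)

print(subs("x*(x+1)", "y+2"))   # y+2*(y+2+1)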
Gödel numbers? No need. The whole point of Gödel numbering is to show that you don’t need strings, you can represent them as (arbitrarily large) integers. This is important but not particularly interesting. In modern computer science terms, it means implementing strings as (arbitrarily long) integers, and nowadays (but not in the 30’s) everyone believes this without seeing the details.
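If you want to see the point in a couple of lines: any string round-trips through a single (arbitrarily large) Python integer, so nothing is lost by trading strings for numbers. A quick sketch (ignoring corner cases like leading zero bytes):

def to_number(s: str) -> int:
    # Pack the UTF-8 bytes of s into one arbitrarily large integer.
    return int.from_bytes(s.encode("utf-8"), "big")

def to_string(n: int) -> str:
    # Unpack the integer back into the original string.
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

n = to_number("¬T(subs(x,x))")
print(n)              # one big integer
print(to_string(n))   # ¬T(subs(x,x))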
So far so good. One last little step … and we go over the cliff. The last step is to add a predicate T of strings that says its argument is a formula and that this formula is true (with free variables universally quantified); call the extended language ℒ+. T seems harmless enough, but with it we can reproduce the Liar Paradox.
Provided we can make a sentence refer to itself. This, not Gödel numbering, is the tricky part.
Since ℒ+ has strings, subs, and T, we can talk about whether or not a formula is true of itself (as a string). If a formula is not true of itself (it ‘rejects’ itself) let’s call it neurotic.
To see that we can define neurosis, let’s say that a formula Φ is true of a formula Θ iff Φ is true when all occurrences of the variable x in Φ are replaced by Θ (as a string constant). If we call the result of this substitution Φ[Θ], then to say that Φ is true of Θ is to say that Φ[Θ] is true.
Then let Ψ be the sentence
¬T(subs(x,x))
It should be clear that Ψ says that its argument is neurotic. What about Ψ, is it neurotic? Is Ψ[Ψ] true or false?
On the one hand, if it’s false, then by definition of neurosis Ψ is neurotic. But since Ψ tests for neurosis, Ψ[Ψ] should be true. On the other hand, if Ψ[Ψ] is true, then since Ψ tests for neurosis, Ψ is neurotic. But this means by the definition of neurosis, Ψ is not neurotic. No way out. (You may recognize this as a variant of the “barber who shaves all those who don’t shave themselves” paradox.)
Thus Ψ[Ψ] is our liar sentence. I can tell you exactly what it is; it’s
¬T(subs("¬T(subs(x,x))","¬T(subs(x,x))"))
and is, by my count, 41 characters long.
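For the skeptical, here is a small Python check of the construction and the count, assuming (as above) that formulas are plain strings, the variable is literally x, and the substituted formula goes in as a quoted string constant:

def bracket(phi: str, theta: str) -> str:
    # Phi[Theta]: replace every x in phi by theta written as a string constant.
    return phi.replace("x", '"' + theta + '"')

psi = "¬T(subs(x,x))"        # Psi says: my argument is neurotic
liar = bracket(psi, psi)     # Psi[Psi], the liar sentence

print(liar)        # ¬T(subs("¬T(subs(x,x))","¬T(subs(x,x))"))
print(len(liar))   # 41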
We can make the argument clearer (if not as precise) using our functional shorthand. We define Ψ by the rule
Ψ[Φ] = ¬Φ[Φ]
Then
Ψ[Ψ] = ¬Ψ[Ψ]
Those who are familiar with the λ calculus or combinatory logic will detect the Y combinator behind this argument. The combinator Y is, as a λ-expression,
λF. (λx. F(x x)) (λx. F(x x))
It’s called a fixed point combinator because YF reduces to F(YF); YF is a fixed point of F. The ISWIM (where-clause) version is much easier to understand:
G(G) where G(x) = F(x(x))
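If you want to run it, here is a sketch in Python. Python evaluates arguments eagerly, so Y as written above would loop forever; the usual fix is the eta-expanded variant, often called the Z combinator:

# Eta-expanded fixed point combinator: Z(F) behaves like F(Z(F)).
Z = lambda F: (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

# Example: factorial defined without naming itself.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120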
Working back from this contradiction: we can’t consistently add a truth predicate to our basic language ℒ. That in turn means we can’t define T in ℒ, otherwise ℒ would be inconsistent. That’s what Tarski meant when he called his result “the undefinability of truth”.
Can we salvage anything from this? Yes, and this is due to Tarski and Saul Kripke.
There is no harm in applying T to formulas that don’t use T; the meaning is obvious. Call the language allowing this ℒ′. Similarly, applying T to ℒ′ formulas is OK; call the language where this too is allowed ℒ″. We can create a sequence ℒ, ℒ′, ℒ″, ℒ‴, … (This is Tarski’s hierarchy).
We can throw these all together producing a language ℒ*. But then we can create ℒ*′, ℒ*″ etc. Generalizing this we have a hierarchy indexed by the countable ordinals (don’t ask). Kripke’s proposal was to define a single language with a single truth predicate in which anything goes but in which sentences not caught up in this construction have an intermediate truth value. Thus Ψ[Ψ] would be neither -1 (false) nor +1 (true) but 0, with 0 being its own negation. I’ll let you decide whether this makes Ψ[Ψ] meaningful after all.
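Kripke’s construction is usually set out over a three-valued (strong Kleene) scheme; here is a minimal sketch of those connectives on the domain {-1, 0, +1}, with 0 its own negation as above:

# Strong Kleene connectives over -1 (false), 0 (undetermined), +1 (true).
def neg(a):     return -a          # 0 is its own negation
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

print(neg(0))        # 0  -- negating the liar leaves it undetermined
print(conj(0, -1))   # -1 -- a conjunction with a false conjunct is false
print(disj(0, 1))    # 1  -- a disjunction with a true disjunct is true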
Sentences that have a conventional truth value Kripke calls grounded; those, like the liar sentence, ungrounded. You can think of the ungrounded sentences as those in which evaluation fails to terminate. Notice that “this statement is true” is ungrounded. (Kripke found a way around this but I won’t go into the details.)
Finally, infinitesimal logic can shed some light on groundedness. If we redefine T(f) to be the truth value of f times 𝛆, and evaluate over the infinitesimal truth domain
-1, -𝛆, -𝛆², -𝛆³, … 0 … 𝛆³, 𝛆², 𝛆, 1
then we get a more nuanced result. The power of the infinitesimal tells us roughly how many layers of truth predicate we have to go through to decide between true and false.
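To picture the bookkeeping (one possible encoding, not a full semantics): write a value ±𝛆ᵏ as a pair (sign, k); negation flips the sign, and applying T bumps the exponent.

# Rough sketch: encode a truth value sign * eps**k as the pair (sign, k).
# Larger k means more layers of the truth predicate to unwind.
def apply_T(v):
    sign, k = v
    return (sign, k + 1)   # T multiplies the truth value by eps

def negate(v):
    sign, k = v
    return (-sign, k)

print(apply_T((+1, 0)))            # (1, 1), i.e. eps
print(apply_T(apply_T((+1, 0))))   # (1, 2), i.e. eps squared
print(negate(apply_T((+1, 0))))    # (-1, 1), i.e. -eps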