Why is the Lucas-Penrose Argument Invalid?

Manfred Kerber

School of Computer Science
The University of Birmingham
Birmingham B15 2TT, England

www.cs.bham.ac.uk/~mmk

ABSTRACT

It is difficult to prove that something is not possible in principle. Likewise it is often difficult to refute such arguments. The Lucas-Penrose argument tries to establish that machines can never achieve human-like intelligence. It is built on the fact that any reasoning program which is powerful enough to deal with arithmetic is necessarily incomplete and cannot derive a sentence that can be paraphrased as "This sentence is not provable." Since humans would have the ability to see the truth of this sentence, humans and computers would obviously have different mental capacities. The traditional refutation of the argument typically involves attacking the assumptions of Gödel's theorem, in particular the consistency of human thought. The matter is confused by the prima facie paradoxical fact that Gödel proved the truth of the sentence that "This sentence is not provable." Adopting Chaitin's adaptation of Gödel's proof, which involves the statement that "some mathematical facts are true for no reason! They are true by accident," and comparing it to a much older incompleteness proof, namely the incompleteness of the rational numbers, makes the paradox vanish and clarifies that the task of establishing arbitrary mathematical truths about numbers by finitary methods is as infeasible for machines as it is for human beings.

1.  Introduction

Probably the most prominent and most articulate argument for why the whole field of AI is doomed to failure is the so-called Lucas-Penrose argument, which can be summarized as follows: since Gödel proved that in each sound formal system - which is strong enough to formulate arithmetic - there exists a formula which cannot be proved by the system (assuming the system is consistent), and since we (human beings) can see that such a formula must be true, human and machine reasoning must inevitably be different in nature, even in the restricted area of mathematical logic. This attributes to human mathematical reasoning a very particular role, which seems to go beyond rational thought. Note that the argument is not about general human behaviour, and not even about the process of how to find mathematical proofs (which is still only poorly understood), but just about the checking of (finite) mathematical arguments.

There have been detailed refutations of this argument, actually so many that just a fair review of the most important ones would easily fill a full paper. For this reason I want to mention only the one by Geoffrey LaForte, Pat Hayes and Ken Ford [Laforte98], which explains where Penrose goes wrong and why his argument does not hold. LaForte et al. discuss in detail arguments, going partly back to Turing, Gödel, and Benacerraf, that the assumptions under which Gödel's theorem holds for a system, namely soundness and consistency, are not fulfilled for humans, and that we cannot know whether we are sound or consistent. This, however, would be necessary in order to reliably establish mathematical truths such as Gödel's theorem. In my view, the authors convincingly refute the Lucas-Penrose argument.

So why another paper? Mainly for two reasons. Firstly, this refutation - as well as the others I am aware of - leaves a paradoxical situation: the sentence which Gödel constructed in his proof, which basically can be described as "This sentence is not provable," is proved, since we know that it is true. But how can it be proved when it is not provable? Or do we not know it and it is not proved? Is it just a confusion of meta and object level? Gödel's result is still puzzling, and most interpretations given in order to refute the Lucas-Penrose argument are pretty complicated. Secondly, the arguments of LaForte et al. rely on assumptions about "the nature of science in general, and mathematics in particular." While in my view these arguments are sound, they neither go to the core of the problem - since the Lucas-Penrose argument would be invalid even if these assumptions were false - nor are they as simple as they should be (and as Hilbert put it, "You have not really understood a mathematical sentence until you can explain it to the first person you meet on the street in a way that they can understand it.").

The real core of the matter is the fact that there are two classes of mathematical truths, namely those which can be finitely proved (Gödel's theorem belongs to this class) and those mathematical truths which cannot be finitely proved (Gödel's theorem is about the non-emptiness of this class, but does not belong to it). The latter truths are unknowable in principle to human beings and computers alike (be they real or slightly idealized), since both are limited by their finiteness.

In this contribution I want to relate Gödel's incompleteness theorem and the Lucas-Penrose argument (summarized in the next section) via analogy to another incompleteness theorem, namely the incompleteness of the rational numbers, which was discovered 2500 years ago and which is nowadays intuitive (described in section 3). The relation between the two discoveries is closer than one would think at first sight: both have to do with finite representability. Exactly as most real numbers cannot be finitely represented, it is not possible to prove most true statements in a finite way (see section 4). However, approximations are possible (see section 5). Since the restrictions apply to human beings and machines alike, we can conclude (in section 6) that the Lucas-Penrose argument is invalid.

2.  Gödel's Incompleteness Theorem and the Lucas-Penrose Argument

Gödel's proof of the incompleteness of logical calculi which are strong enough to deal with arithmetic can be summarized as follows: In a first step Gödel shows that provability can be defined in any logical system that is powerful enough to encode arithmetic. More precisely, if a calculus is given, then it is possible to define a binary predicate symbol "Provable(,)" such that for numbers m and n, Provable(m,n) holds (semantically, in the intended interpretation) precisely when m encodes a proof of the theorem encoded by n. For instance, "There is no proof of 0=s(0)" can be expressed as ~EXISTS m. Provable(m,|0=s(0)|). Here |.| is a function which injectively maps a string, concretely here the string "0 = s(0)", to a number, the so-called Gödel number, e.g. 170104021703. There are different ways to produce such a translation; Gödel introduced one in his original paper [Goedel31]. In our case, the theorem is then translated to ~EXISTS m. Provable(m,170104021703).
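
For concreteness, here is a minimal sketch (in Python, my own illustration) of such an injective translation. The two-digit symbol codes are an assumption chosen so that the string "0=s(0)" yields exactly the example number 170104021703 used above; Gödel's own encoding in [Goedel31] is a different, more involved one.

  # Hypothetical symbol codes; any injective, fixed-width assignment would do.
  SYMBOL_CODES = {"0": "17", "=": "01", "s": "04", "(": "02", ")": "03"}

  def goedel_number(formula: str) -> str:
      """Map a formula over the known symbols injectively to its Goedel number (as a digit string)."""
      return "".join(SYMBOL_CODES[symbol] for symbol in formula)

  print(goedel_number("0=s(0)"))   # prints 170104021703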

Based on this construction, Gödel enumerates all formulae with one free variable w as P1(w), P2(w), ... . The formula which states that Pw(w) is not provable is itself a formula with one free variable w and hence occurs in this enumeration, say at index k: Pk(w) <-> ~EXISTS m. Provable(m,|Pw(w)|). By diagonalizing, that is, by identifying k and w, he obtains Pk(k) <-> ~EXISTS m. Provable(m,|Pk(k)|), a formula Pk(k) which states its own unprovability. That this formula must be true can easily be seen by a proof by contradiction: if it were not true, it would be provable, but - assuming that arithmetic is consistent and the calculus sound - only true theorems can be proved, hence it would be true, which is absurd.

As a consequence, Gödel can state that any theory T that is finitely describable and contains arithmetic is necessarily incomplete. This has led to the following argument by Lucas and Penrose:

Lucas first published the argument in [Lucas61] and discussed it again in [Lucas96]. Penrose has stated it in several places, going back to [Penrose89]; a concise description can be found in [Penrose94], from which the following quotation is taken:

"The inescapable conclusion seems to be: Mathematicians are not using a knowably sound calculation procedure in order to ascertain mathematical truth. We deduce that mathematical understanding - the means whereby mathematicians arrive at their conclusions with respect to mathematical truth - cannot be reduced to blind calculation!"

Lucas argues that the argument using the Gödel theorem is more relevant than the corresponding one using Turing's proof of the unsolvability of the halting problem, since

"it raises questions of truth which evidently bear on the nature of mind, whereas Turing's theorem does not; it shows not only that the Gödelian well-formed formula is unprovable-in-the-system, but that it is true. ... But it is very obvious that we have a concept of truth. ... A representation of the human mind which could take no account of truth would be inherently implausible." [Lucas90]

Nota bene, the argument is not about finding a proof of Gödel's theorem in the first place, which is certainly a challenge for computers, but about understanding (checking) it, a challenging problem as well, but one that has been solved. However, Lucas and Penrose will not be convinced by mechanical proofs of Gödel's theorem (as, for instance, described in [Bundy93]), since for any such system, limitations would apply which would not apply to human reasoning. But is this true? Do they really not apply to human reasoning? Paradoxical situations - let us include Gödel's discovery in them for a moment - can easily lead to very strange conclusions. If we do not deal with them with great care, we easily manoeuvre ourselves into absurd positions, as Zeno did on the basis of the Achilles-and-the-Tortoise paradox.

Zeno's paradox is built on a fictional race between Achilles and a tortoise, in which the tortoise is a bit ahead at the start, at time point t0. Zeno argues that Achilles has no chance ever to catch up, even though he is the fastest runner of all: when he reaches, at time point t1, the position p0 where the tortoise was at t0, the tortoise will already have moved on to a new position p1, which Achilles can reach at a time t2, but then the tortoise has moved on to p2, and so on. At any point in time ti the tortoise is ahead of Achilles. Zeno concluded that this proves that motion is impossible. With the deep understanding of infinite series we have reached nowadays, we see where Zeno went wrong: all the time intervals under consideration add up to a finite amount of time, and when that has passed, Achilles reaches the tortoise and after that overtakes it. For many centuries this example was not only funny, but deeply confusing, and led to strange conclusions like Zeno's that motion is impossible. We see that special care is advisable when concluding what follows from a paradox.
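
As a small worked example of this resolution (with speeds assumed here purely for illustration: Achilles runs ten times as fast as the tortoise and needs time t to cover the tortoise's head start), the catching-up intervals form a geometric series with a finite sum:

  \[
    t + \frac{t}{10} + \frac{t}{100} + \dots
      \;=\; t \sum_{i=0}^{\infty} \Bigl(\frac{1}{10}\Bigr)^{i}
      \;=\; \frac{t}{1 - \frac{1}{10}}
      \;=\; \frac{10}{9}\,t ,
  \]

so after the finite time 10t/9 Achilles draws level with the tortoise and then overtakes it.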

What, for instance, about a sentence like the one stated by Anthony Kenny: "John Lucas cannot consistently make this judgement"? Does this not cause similar problems for John Lucas as Gödel's theorem does for a machine? Gödel himself concluded that we just cannot prove the consistency of human reasoning, while Alan Turing concluded that we cannot assume humans to be consistent. Both statements are probably true, but that is not the point with respect to the Lucas-Penrose argument. Surely we cannot even assume that our proof checking efforts are flawless.

Any mathematical reasoning about the world - and the world includes human beings and computers - makes idealizations. We can (and do) idealize humans by assuming them to be attentive, never tired, and not making any mistakes in proof checking, just as we idealize computers as wonderful machines which never crash, are never infected by viruses, never suffer from hardware faults, and never make any mistakes. We can make these idealizations since we can strive towards the corresponding ideals and approximate them (albeit never fully achieve them).

However, in the following we should not and will not make idealizations which cannot be approximated. Such an idealization would be to deal with infinitely many objects (digits or proof steps) in finite time, which would allow computers or humans to deal with infinitely many different cases in a finite amount of time. This could theoretically be done by speeding up and using, for instance, 1 second for the first case, 1/2 second for the second case, 1/4 for the third case, 1/8 for the fourth, and so on. This way the whole process of dealing with infinitely many cases would take just 2 seconds.

Lucas and Penrose argue that human beings (and in consequence idealized human beings) can see the validity of Gödel's theorem while idealized computers (and hence real computers) cannot. I will argue that even using such idealizations (and a Turing machine is an idealization of real computers) there are things which idealized computers and idealized human beings alike can do, and things which neither can do, in principle. Both can understand (and have understood) Gödel's theorem. Gödel's theorem shows that there are truths about arithmetic properties which can be recognized neither by idealized computers nor by idealized human beings. Before we go deeper into this issue, let us first step back 2500 years in time and look at another limitation which holds for humans and machines alike and which produced another foundational crisis in mathematics.

3.  The Discovery of Irrational Numbers - Another Crisis in Mathematics

Greek mathematicians related the lengths of different line segments and discovered that (in all cases they investigated) they could find integers such that the relative lengths of any two segments could be expressed as the proportion of two integers. This led to the belief, propagated by Pythagoras (570-500 BC), that "All is number." In particular it meant that the proportion of any two lines could be finitely expressed, namely by a pair of two (finite) natural numbers, or, as we would say, by a rational number (note the dual meaning of rational: that something is a ratio, and that something makes sense).

To the great surprise of the Pythagoreans, not much later they made the discovery that their slogan is not true at all. There are two elementary ancient proofs of this, the one below ascribed to Pythagoras himself, another, based on a pentagram, ascribed to his student Hippasos of Metapont. It is not clear who really made the discovery first.

The proof ascribed to Pythagoras is based on a diagonal in a square (it is a strange irony that diagonals play an important role in Pythagoras' incompleteness proof as well as in Gödel's):

Fig. 1. Square with diagonal

Assume a square of side length a with a diagonal d as in Fig. 1. By the Pythagorean Theorem we get d² = a² + a², that is, d² = 2 * a². Assume the ratio d : a = p : q with p and q minimal positive integers (i.e., p and q are relatively prime). Hence, we have p² = 2 * q², from which we get that p must be even, hence p² must be divisible by 4. But then 2 * q² = p² is divisible by 4, hence q², and in consequence q, must be even as well, which is a contradiction, since we assumed p and q to be relatively prime.

The discovery that the ratio of d and a cannot be represented by two integers caused a foundational crisis in mathematics, since one of the axioms, one of the "self-evident truths" on which the Pythagoreans built their mathematics, turned out to lead to a contradiction. Ratios of line segments which cannot be expressed by what we call today a rational number are called "irrational," the same word we use for things which go beyond rational thought. Why was this dogma so much at the heart of mathematics? Why should the length of a single line in relation to an arbitrarily fixed unit be the ratio of two natural numbers? I think because it was the attempt to describe the continuum of all possible lengths in a finite way. The discovery that this cannot be done as a ratio was an indication that it is not possible in general. However, it is not a proof that it is impossible to describe the length in a finite way altogether. Actually sqrt(2) (as we represent the ratio of d and a today) is finitely representable as an algebraic number, however not as the ratio of two integers. A finite representation of sqrt(2) is, for instance, "the positive zero of the function f : x |-> x² - 2."

We can imagine the shock that the discovery of irrationality caused at the time. It not only destroyed the possibility of representing all lengths as ratios of integers in a very convenient way, but it also seemed to destroy the prospect of finitely representing the corresponding quantities at all. Mathematics was not far enough developed at the time to see that certain important lengths (actually all the ones we can speak about) can be finitely represented in some other way, and the discovery was not a proof that there are ratios (numbers) which are inherently inexpressible in a finite way.

It took more than 2000 years to gain a deep understanding of what was going on. The final point of this development is the strict construction of the field of real numbers, for instance, in the form of Dedekind cuts [Dedekind88].

Nowadays we distinguish in particular rational, irrational, algebraic, transcendental, and computable numbers, and their complements. Real numbers can be represented, for instance, as infinite decimal numbers. To give some examples: 2/3 is rational, sqrt(2) is irrational but algebraic, and pi is transcendental but still computable.

Cantor proved (also by a diagonal argument) that there are more real numbers than natural numbers, that is, that the real numbers are not countable, while rational, algebraic, and computable numbers are all countable.
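
The following small sketch (my own illustration, not part of the original argument) shows the mechanics of Cantor's diagonal argument: given any purported enumeration of the reals in [0, 1) by their decimal digits, we construct a number that differs from the n-th enumerated real in its n-th digit and hence occurs nowhere in the enumeration. The example enumeration passed in at the end is just an arbitrary assumption for illustration.

  def diagonal_number(enumeration, digits: int) -> str:
      """First `digits` decimal places of a real not listed by the enumeration,
      where enumeration(n, i) is the i-th digit of the n-th enumerated real."""
      new_digits = []
      for n in range(digits):
          d = enumeration(n, n)                       # the n-th digit of the n-th real
          new_digits.append("5" if d != 5 else "4")   # choose any digit different from d
      return "0." + "".join(new_digits)

  # Example enumeration (an assumption): the n-th real has all digits equal to n modulo 10.
  print(diagonal_number(lambda n, i: n % 10, 8))      # 0.55555455 differs from each listed real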

There are many more numbers (uncountably many) which cannot be finitely represented, but only countably many which can. In particular, most real numbers can only be approximated, while some (and in effect most of the important) real numbers can be finitely described. For instance, the diagonal which caused so much trouble to the Pythagoreans can be described as sqrt(2), or as the only positive zero of the function f(x) = x² - 2. Just adding all those numbers which can be finitely described does not yield a complete structure, however. In order to do so it is necessary to go to the full structure of the real numbers, which includes true random numbers or numbers which have a component of randomness (such as a decimal number which is constructed from a random number by interleaving its digits with 1s, e.g., 4.17151915101116111518151219171611121912... ).
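
To illustrate the difference between a finite description and the infinite expansion it describes, here is a minimal sketch (my own, in Python): the finite description "the positive zero of f(x) = x² - 2" suffices to approximate sqrt(2) to any desired precision by interval bisection, although the full decimal expansion can never be written down.

  from fractions import Fraction

  def approximate_sqrt2(steps: int) -> Fraction:
      """Return a rational lower bound of sqrt(2), accurate to within 2**-steps."""
      low, high = Fraction(1), Fraction(2)      # the positive zero of x**2 - 2 lies between 1 and 2
      for _ in range(steps):
          mid = (low + high) / 2
          if mid * mid <= 2:
              low = mid                         # keep the half-interval containing sqrt(2)
          else:
              high = mid
      return low

  print(float(approximate_sqrt2(60)))           # approximately 1.414213562373095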

While computers are much better at computing with numbers, in principle they suffer from the same limitation as humans do with respect to true random numbers. They can neither store them, nor manipulate them, but only approximate them.

4.  Finitary vs. Infinitary - Numbers, Theorems, and Proofs

What has all that to do with logic and proofs? Let us look at the closed interval [0, 1] = {x IN R | 0 <= x <= 1} in binary (dual) representation (we neglect in the following the ambiguity of representations ending in a period of 1s, e.g. that 0.0111... with period 1 is equivalent to 0.1000... with period 0). In such a representation each number corresponds one-to-one to a unary predicate on the natural numbers; for instance, the predicate "even" corresponds to the number 2/3 in binary notation:

  0.    1      0      1      0      1      0      1     ...
        |      |      |      |      |      |      |
       P(0)  ~P(1)   P(2)  ~P(3)   P(4)  ~P(5)   P(6)   ...

This one-to-one relation holds alike for numbers which can be finitely represented (such as the rational number 2/3) and for those which cannot be finitely represented (such as random numbers). Since infinite random real numbers are not computable (we will come back to this), this shows that most predicates cannot even be finitely defined, let alone that we could prove their properties in a finite way.
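
A small sketch (again my own illustration) of this correspondence: digit i of the binary expansion is 1 exactly when P(i) holds, so finite truncations of the predicate yield finite approximations of the associated real number. For the predicate "even" the partial sums approach 2/3, as stated above.

  from fractions import Fraction

  def predicate_to_real(predicate, digits: int) -> Fraction:
      """Finite approximation of the real in [0, 1] whose i-th binary digit is 1 iff predicate(i) holds."""
      return sum((Fraction(1, 2 ** (i + 1)) for i in range(digits) if predicate(i)), Fraction(0))

  def is_even(n: int) -> bool:
      return n % 2 == 0

  print(predicate_to_real(is_even, 20))          # 349525/524288
  print(float(predicate_to_real(is_even, 20)))   # about 0.6666660, approaching 2/3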

Going beyond incompleteness can of course not be achieved by adding single elements (be it sqrt(2) to the rationals or Gödel's Pk(k) formula as an axiom to the calculus). Rather we have to add all the truly infinite elements, which are inherently not representable in a finite way (that is, which are not compressible). In the case of the rational numbers this leads to the complete field of real numbers; in the case of logic it leads to the introduction of an omega-rule (see, e.g., Grzegorczyk, Mostowski, and Ryll-Nardzewski [Grzegorczyk58]). In order to understand the omega-rule let us look at proofs in the structure of the natural numbers (the argument would hold for other structures as well, of course). The natural numbers are a structure generated by 0 and the successor function s. They are represented as 0, 1, 2, 3, and so on. How can we prove a property like Goldbach's conjecture, that every even number greater than 3 can be written as the sum of two prime numbers, using the omega-rule?

The omega-rule has the following form:

  P(0)     P(1)     P(2)     ...     P(n)     ...
  ------------------------------------------------
  FORALL n IN N. P(n)
Let P(n) be defined as 2*n <= 3  |  EXISTS p IN N. EXISTS q IN N. prime(p) & prime(q) & p+q = 2*n. In order to show FORALL n IN N. P(n) we have to show P(0), P(1), P(2), P(3), P(4), ... one after the other. For instance, for n = 0 and n = 1 we have 2*n <= 3; for n = 2 we find p = q = 2; for n = 3, p = q = 3; for n = 4, p = 3 and q = 5; and so on. We could go on like that (and computers did go on to big numbers to check the conjecture), but of course we could effectively never write down the full proof this way - assuming there is no counterexample. Likewise we can effectively never write down a real number in its full decimal or binary form. For rational numbers there exists an easy extension of the representation schema which allows for a finite representation by periods: we can write the number 2/3 in binary notation as 0. followed by the period 10, that is, 0.101010... . In the case of proofs there exists a constructive omega-rule which allows for a finite representation in certain cases. We will discuss this in more detail in the next section.

Note that it is important for the argument here not to make the unrealistic idealization of speeding up proofs by halving the time from one step to the next. With such an assumption infinite omega-proofs could be carried through in finite time.

As Grzegorczyk, Mostowski, and Ryll-Nardzewski show, the omega-rule fully captures the concept of structures like the natural numbers: it simply says, in order to show that a property holds for all natural numbers (FORALL n IN N. P(n)), prove the property for each natural number, P(0), P(1), P(2), P(3), P(4), ... . Note that the argument as to why the property P(n) above is true may be different for every n, e.g., P(8) since 3 + 13 = 16 with prime(3) and prime(13), and P(9) since 7 + 11 = 18 with prime(7) and prime(11). Up to now nobody has seen any pattern in the individual arguments for P(n) which could be generalized. Chaitin's reinterpretation of Gödel's incompleteness proof means that it may be that there is no such pattern. Until somebody has found a (finite) proof of the Goldbach conjecture, or a counterexample, we cannot rule out that the Goldbach conjecture is true and that there is no finite proof which covers all cases. That is, although the Goldbach conjecture itself can be concisely stated, it may be the case that it is true for reasons which cannot be concisely stated, but which are patternless in the sense that for infinitely many n infinitely many different reasons are needed as to why P(n) is true. In this way, a program of the following form may correspond to an omega-proof (with m = 2*n):

  m := 2;
  bool := true;
  repeat
    m := m + 2;
    if not ExistsPrimes(m) then
      bool := false
  until bool = false
Here ExistsPrimes(m) is a computable function which tests whether there are numbers p and m-p both smaller than m and both prime numbers.

We do not know whether this program terminates (with m the smallest counterexample to the Goldbach conjecture) or runs forever. The latter would imply the correctness of the Goldbach conjecture. This example also clarifies the relationship between proving and the Halting problem: if the Halting problem were computable, questions such as Goldbach's conjecture could be decided simply by checking whether a particular program halts or not.
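
For concreteness, here is a runnable rendering (my own, in Python) of the pseudo-code above; since the loop would run forever exactly if Goldbach's conjecture is true, the sketch adds an explicit search limit as an assumption so that it terminates.

  def exists_primes(m: int) -> bool:
      """Test whether there are primes p and m - p, both smaller than m."""
      def is_prime(k: int) -> bool:
          return k > 1 and all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))
      return any(is_prime(p) and is_prime(m - p) for p in range(2, m))

  def first_goldbach_counterexample(limit: int):
      """Return the smallest even m <= limit violating Goldbach's conjecture, or None."""
      m = 2
      while m + 2 <= limit:
          m += 2
          if not exists_primes(m):
              return m
      return None

  print(first_goldbach_counterexample(10000))   # None: no counterexample below 10000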

The omega-rule has two advantages. Firstly, it is simple to understand: "If something is true for each single natural number, then it is true for every natural number." Secondly, it provides a complete system (in a similar sense as the real numbers are complete using Dedekind cuts or Cauchy sequences). Its major disadvantage is that in most cases it does not allow us to effectively prove things, but only to approximate the full proof. The transition is one from finitary to infinitary proofs. With this transition it is possible to get a completeness result. Note that this does not contradict Gödel's incompleteness result, since Gödel considers in his incompleteness proof a finite notion of proof only. The encoding of infinite proofs into numbers would not work (at least not using natural numbers).

The omega-rule is hard to accept as an appropriate rule of proof, since it goes against the main idea of the concept of proof, namely that a proof has to be finite. As Hilbert put it ([Hilbert26], quoted from [Heijenoort67, p. 381f]), a proof means that we start with "certain formulas, which serve as building blocks for the formal edifice of mathematics, [...] called axioms. A mathematical proof is an array that must be given as such to our perceptual intuition; it consists of inferences according to [a fixed] schema [...] where each of the premisses [...] in the array, either is an axiom or results from an axiom by substitution, or else coincides with the end formula of a previous inference or results from it by substitution. A formula is said to be provable if it is the end formula of a proof." Or, in their simplification, as Monty Python put it in the argument clinic sketch: "An argument is a connected series of statements intended to establish a definite proposition." Although not explicitly stated, the series of statements is assumed to be finite, and mechanically checkable by a proof checker, which can be a relatively simple computer program.

5.  Approximating the omega-Rule by Mathematical Induction and the Constructive omega-Rule

Let us first have a closer look at Gödel's theorem again. Its proof is a finite proof that Pk(k) is true. But does Pk(k) not state that it is not finitely provable? No! Here we have to be more precise. Gödel finitely proved that for any fixed finite calculus there exists a true formula Pk(k) which cannot be finitely proved within that calculus. We could well add Pk(k) as an axiom, since it is true in the standard model (we do not discuss here the issue of non-standard models of arithmetic). Then we would get a new finite calculus, which has limitations of its own. This is analogous to the situation in number theory: the extension Q(sqrt(2)) of the rational numbers, defined as the smallest field which contains the rational numbers and sqrt(2) (or Q(sqrt(n)) for a non-square number n), is a finite extension of Q which allows for a finite description of sqrt(2) - namely the expression sqrt(2) itself is such a description in Q(sqrt(2)) - but Q(sqrt(2)) is not a complete field, and only the transition to the field of real numbers R makes every Cauchy sequence converge. The Pythagorean proof shows that rational numbers are not sufficient for all purposes; it turns out that finite representations, in general, are not sufficient. Similarly, finite proofs are not sufficient for establishing the truth of arbitrary true formulae. Only the transition to the full omega-rule (exactly as the transition to all real numbers) results in a complete system.
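
As a small side illustration (my own sketch, not from the paper) of what a finite description in Q(sqrt(2)) means: every element of Q(sqrt(2)) can be stored as a pair (a, b) of rationals standing for a + b*sqrt(2), so sqrt(2) itself gets the finite description (0, 1), and the field operations stay within such pairs.

  from fractions import Fraction

  class QSqrt2:
      """An element a + b*sqrt(2) of the field Q(sqrt(2)), with a and b rational."""
      def __init__(self, a, b):
          self.a, self.b = Fraction(a), Fraction(b)
      def __add__(self, other):
          return QSqrt2(self.a + other.a, self.b + other.b)
      def __mul__(self, other):
          # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (a*c + 2*b*d) + (a*d + b*c)*sqrt(2)
          return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                        self.a * other.b + self.b * other.a)
      def __repr__(self):
          return f"{self.a} + {self.b}*sqrt(2)"

  sqrt2 = QSqrt2(0, 1)                # the finite description of sqrt(2) in Q(sqrt(2))
  print(sqrt2 * sqrt2)                # 2 + 0*sqrt(2): its square is exactly 2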

Since the omega-rule is not appropriate for practical proofs in its full generality (unless we make the unrealistic assumption that we can speed up and check infinitely many steps in finite time), we need ways of approximating it. The two most prominent ones are proof by mathematical induction and proof by a constructive variant of the omega-rule.

For numbers we have quite a good understanding of what can be represented finitely, and we have the important notion of a computable number. For formulae we can define a similar notion of computable truth, using a constructive version of the omega-rule.

The constructive version of the omega-rule is a computable restriction of the general omega-rule in the sense that each of the P(n) in

  P(0)     P(1)     P(2)     ...     P(n)     ...
  ------------------------------------------------
  FORALL n IN N. P(n)
can be proved in a uniform way, that is, there is a generalized proof proof(m) with a parameter m such that if m is instantiated to a particular n, then proof(n) is a proof of P(n), for any n. For details, see for instance [Baker91]. Note that we do not have such a generalized proof for our outline of a proof of Goldbach's conjecture: for each m we produce a different argument, and we would need to check the validity of infinitely many arguments in order to establish the applicability of the omega-rule. If, however, we had a generalized proof, we could check the generalized (finite) proof and apply the constructive omega-rule.

The most prominent way to establish finite arguments for numbers and other structures is mathematical induction. It has been known at least since Fermat and is a standard way to establish mathematical facts. One such rule is:

  P(0)     FORALL n IN N. P(n) -> P(s(n))
  ----------------------------------------
  FORALL n IN N. P(n)

An inductive argument of the kind above can be translated to a proof using the constructive omega-rule. If we assume proofbase to be a proof of P(0) and, for a particular m, proofstep(m) to be a proof of P(m) -> P(s(m)), then a proof of P(n) for any n can be constructed as follows: take the base proof proofbase, set m to 0, and establish P(m). Then go through the following loop, starting with m = 0 and increasing m by 1 until s(m) = n: take proofstep(m) to establish P(m) -> P(s(m)) and apply modus ponens on P(m) and P(m) -> P(s(m)) to establish P(s(m)). This way there is a uniform way to establish P(m) for any m, hence the constructive omega-rule is applicable.
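
The loop just described can be made concrete in a few lines. The following sketch (my own illustration; proofs are represented simply as nested strings, an assumption made for readability rather than a real proof calculus) unrolls an induction proof into a proof of P(n) by n applications of modus ponens, which is exactly the uniformity the constructive omega-rule requires.

  def unroll_induction(proof_base: str, proof_step, n: int) -> str:
      """Assemble a proof of P(n) from proof_base : P(0) and proof_step(m) : P(m) -> P(s(m))."""
      proof = proof_base                    # establishes P(0)
      for m in range(n):
          step = proof_step(m)              # establishes P(m) -> P(m+1)
          proof = f"mp({proof}, {step})"    # modus ponens yields P(m+1)
      return proof

  print(unroll_induction("base", lambda m: f"step({m})", 3))
  # mp(mp(mp(base, step(0)), step(1)), step(2)) - a finite, uniformly constructed proof of P(3)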

That is, we can say that inductive proofs can be transformed to constructive omega-proofs, and any constructive omega-proof is a general omega-proof. While general omega-proofs can be approximated by inductive proofs, and at least as well by constructive omega-proofs, it is not possible to capture the omega-rule in general by finite means, since the arguments as to why individual expressions such as P(1), P(2), ..., P(n), ... are true may not be generalizable, or as Gregory J. Chaitin [Chaitin98] puts it: "... some mathematical facts are true for no reason! They are true by accident!" Taking a cardinality argument, we may go a step further and say: "Most mathematical facts are true for no reason! They are true by accident!" For instance, in the case of Goldbach's conjecture, it is not clear at all that the truth of P(s(n)) can be proved from the truth of P(n), or of all the previous instances P(0), P(1), P(2), ..., P(n). It may well be that the different cases are all true without any deeper connection and that each case necessarily has to be established on its own from first principles.

Finite expressions, be they numbers or proofs, can be represented and fully understood by computers and humans. (This is of course another simplification: mathematical finiteness may go way beyond anything that is practical, and much that is practical for computers is not practical for humans.) The completion of logic to infinite systems shows that any reasoning system, be it human or artificial, is even in its idealized form subject to the limitations of finiteness; that is, certain mathematical truths cannot be proved finitely, exactly as certain real numbers cannot be written down finitely. Gödel's proof itself is a finite proof, but it proves that finite proofs are not sufficient to capture truth in the case of systems which are powerful enough to speak about arithmetic.

The analogy between proofs and numbers can be summarized as in Table 2. In order to gain a deeper insight into what can be finitely proved we need a deeper understanding of the nature of proof than the one we have today. The development of formal calculi corresponds to the understanding of rational numbers, the omega-rule to the understanding of real numbers. In between there is a big scope for finite proofs which go beyond traditional calculi but avoid infinite arguments. This understanding is, however, not yet as intensively developed as in the case of numbers. We need a classification of proofs which includes the possibility of reformulation and compression of problems as well as abbreviations, extensions to an existing calculus, and the usage of translations and meta-proofs to establish mathematical facts. All these methods are quite common in colloquial mathematical language.

                                        Numbers                                    Proofs

  concrete proof                        proof of irrationality of sqrt(2)          Gödel's proof
  generalized incompleteness result     incompleteness of Q                        incompleteness of finite calculi
  patch to incompleteness               extend Q to Q(sqrt(2))                     add the Gödel sentence as an axiom
  full completion                       R, Dedekind cuts, Cauchy sequences         omega-rule
  infinite representations              decimal representations of reals           infinitely long proofs
  compression                           compressed real numbers (e.g., periodic    compressed finite proofs (e.g., constructive
                                        decimals, regular continued fractions)     omega-rule, mathematical induction)
  randomness                            random numbers                             random predicates, random proofs

  Table 2. The analogy between numbers and proofs.

Even when we have gained a much deeper understanding of the nature of proof, there will still be the limitations which lead to Gödel's incompleteness result. If we take a finitary standpoint, then there are only countably many possible (finite) proofs, but there are uncountably many true statements of arithmetic, that is, too many for all of them to have a proof. (Note that in full arithmetic we can in principle make uncountably many statements, assuming we allow for infinite representations, that is, for infinite sets of formulae. For instance, we can represent a number such as pi by an infinite set of formulae: pi = {3<=pi<=4; 3.1<=pi<=3.2; 3.14<=pi<=3.15; 3.141<=pi<=3.142; ... }. If we take all such infinite sets then we get uncountably many true statements, since, as for pi, we can construct such a set analogously for any other of the uncountably many real numbers. If we allow for finite sets only, then we can represent only countably many true statements.)
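
To make this representation concrete, here is a tiny sketch (my own illustration) which generates, one by one, the interval formulae for pi from a finite stock of known digits (the digit string is an assumption of the sketch); only finitely many members of the infinite set can ever actually be written down.

  from decimal import Decimal

  PI_DIGITS = "3.14159265358979"   # finitely many known digits, assumed for the sketch

  def interval_formula(k: int) -> str:
      """The k-th formula: pi enclosed between its k-digit truncation and that value plus 10**-k."""
      lower = Decimal(PI_DIGITS[:2 + k] if k else PI_DIGITS[:1])
      upper = lower + Decimal(1).scaleb(-k)
      return f"{lower} <= pi <= {upper}"

  for k in range(4):
      print(interval_formula(k))   # 3 <= pi <= 4,  3.1 <= pi <= 3.2,  3.14 <= pi <= 3.15,  ...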

6.  Conclusion

Pythagoras had a vision, namely that it should be possible to express the length of any line segment finitely by the ratio of two integers. Likewise Hilbert had a vision, namely that any true mathematical statement can be proved by a finite proof. Initially things looked good for both visions. The Pythagoreans found many examples for which the vision was fulfilled; however, they soon found counterexamples. Likewise Gödel's completeness result [Goedel30] was encouraging, but a year later Gödel proved that Hilbert's vision was not achievable either. The reason why there are problems with the visions is that there are too many real numbers and there are too many true statements. (This does not reduce Gödel's theorem to a cardinality argument; however, the cardinality issue makes it plausible. As pointed out in the footnote above, there are uncountably many true statements about arithmetic. Gödel's theorem does not explicitly state this - in the same way as Pythagoras did not prove that there are uncountably many numbers.) Adding countably many further statements as axioms - such as extending the rational numbers by sqrt(n) for all n - will not result in a complete system, however.

Exactly as there are random real numbers which cannot be finitely expressed in principle (since they cannot be compressed to a finite representation), there are true statements which cannot be finitely proved in principle (since they are true for no good reason and hence their infinite omega-proofs cannot be compressed into a finite form). The limitations, namely to be able to deal with finite representations only - in computing as well as in reasoning - are limitations which hold for humans and computers alike. Gödel's theorem is about these limitations, but is not subject to them. Exactly as sqrt(2) is an indication that not all numbers can be finitely represented but it itself actually can be, Gödel's theorem is an indication that not all true statements can be finitely proved, but it itself is finitely proved (as is Pk(k)), and has been checked by humans and by computers alike.

Hence Gödel's theorem and the limitations of finiteness to which humans and computers are subject are irrelevant for settling the question of whether computers can achieve human-like intelligence or not. Hence the Lucas-Penrose argument is invalid.

References


© 2005, Springer Verlag, KI 2005: Advances in Artificial Intelligence, LNAI 3698, p. 380 - 393. Manfred Kerber