
Why Roger Penrose is Wrong

In his book "Shadows of the Mind", Roger Penrose details an argument, based on Gödel's incompleteness theorem, intended to prove that the mathematical capabilities of human mathematicians are non-computable.

Gödel's Incompleteness Theorem

Here is a brief description and proof of Gödel's incompleteness theorem, in the computational form that Penrose uses.

Let F(X,Y) be a computation that terminates only when it succeeds in proving that the computation X, applied to the input Y, does not terminate, and suppose that F is sound: if F(X,Y) terminates, then X(Y) really does not terminate. Now define G by G(x)=F(x,x), and consider G(G), which is by definition the same computation as F(G,G). If F(G,G) terminated, that would prove that G(G) does not terminate; but G(G) is F(G,G), so it would have terminated after all. This is a contradiction, so G(G) does not terminate. We can see that this is true, yet F can never prove it, since proving it would mean that F(G,G) terminates.

F can be interpreted as a theorem prover

We can regard F as representing a set of logical axioms and rules of proof in a mathematical theory that enables us to prove theorems about the non-termination of computer programs. Thus a terminating execution of F(X,Y) is a "proof" of the theorem that X(Y) does not terminate. The system F is incomplete, in that it does not prove all true statements about non-termination: in particular, it cannot prove that G(G) does not terminate. F can therefore be explicitly extended, for example by adding that statement, to prove a larger set of true statements. Of course the extended F is still incomplete (for exactly the same reason) and can itself be extended, and so on.
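
To make the construction concrete, here is a minimal Python sketch of the diagonal argument. It is a sketch only: the body of F stands for some hypothetical sound proof-searching procedure, which is assumed rather than implemented, and the names are purely illustrative.

    # Sketch only: `F` stands for a hypothetical sound prover. A call
    # F(X, Y) terminates exactly when it finds a proof that X(Y) does
    # not terminate; the proof search itself is assumed, not implemented.

    def F(X, Y):
        """Search some fixed formal system for a proof that X(Y) does
        not terminate, and return only if such a proof is found."""
        ...  # exhaustive proof search, elided

    def G(x):
        # The diagonal program: G(x) simply runs F on the pair (x, x).
        return F(x, x)

    # Consider G(G), which is by definition the same computation as F(G, G).
    # If F(G, G) terminated, that would prove that G(G) does not terminate;
    # but G(G) *is* F(G, G), so it would have terminated after all.
    # Hence G(G) never terminates -- a truth that F itself can never prove,
    # since proving it would require F(G, G) to terminate.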

Interpreting F as a robot mathematician

We can regard F as an algorithm that specifies the operation of a robotic mathematician tasked with proving theorems about non-terminating programs. Gödel's incompleteness theorem tells us that merely by inspecting the design of such a robot, we can "know" something that the robot does not know, i.e. that G(G) as defined above is non-terminating.

Interpreting F as a description of the capabilities of a human mathematician

Let us make the following assumptions -

  1. Human beings operate according to the laws of physics.
  2. The laws of physics are computable, i.e. the behaviour of all physical systems can be predicted using algorithmic calculations.
  3. From the behaviour of a human mathematician we can extract reliably correct theorems about non-terminating programs (e.g. if the mathematician writes a theorem down, submits it to a respected journal, and states when asked that they are "really, really sure" that the proof is correct, then we assume the theorem is reliably correct).

From these assumptions we come to the conclusion that there is some algorithm F which is equivalent to the ability of a human mathematician to state correct theorems about non-terminating programs.

But we can derive G from F, as above, and know that G(G) is a non-terminating program.

But the reader is a human, so we have just proved that a human can know something that a human cannot know.

This is a contradiction, so one of the initial assumptions is wrong.

Roger Penrose wants to come to the conclusion that assumption 2 is incorrect. Some of his opponents challenge assumption 3, i.e. the infallibility of human mathematicians.

I am willing to concede assumption 3 for the sake of argument, with the caveat that it may cause difficulties in practice if anyone attempts to carry out procedures based on it. I concede it because I believe that the real fallacy in Penrose's argument lies elsewhere.

The Fallacy: Extra Hints

To make the contradiction obvious, let the human mathematician who understands that G(G) is non-terminating be the same human mathematician whose mathematical ability F determines. If the mathematician were a robot, telling them that G(G) is non-terminating would cause a genuine increase in their mathematical ability. But Roger Penrose claims that the mathematician already knows G(G) is non-terminating, because they understand the Gödelian argument.

I will show that this is not the case. We must return to basic principles. The task assigned to the function F is the following - given inputs X and Y, to prove that X(Y) does not terminate.

Now suppose the human mathematician is given X=G and Y=G, where G is defined (as above) by G(x)=F(x,x), and F is the function that describes the mathematician's own mathematical ability. The mathematician does not know that F is the function that describes their mathematical ability, unless we tell them so. If they did recognise F, then their understanding of the Gödelian argument would indeed allow them to determine that the G(G) derived from F is non-terminating.

If we tell the mathematician that F is the program determining their mathematical ability, then we are giving them extra information, and that is what enables them to state that G(G) is non-terminating, apparently going beyond the capability determined by F.

We can just as easily program a robot mathematician to accept claims made by trustworthy parties about things that the robot does not already know, for example that a function F is the function that determines that robot's mathematical ability. But the moment that the robot accepts that information, F goes out of date as a description of that robot's mathematical ability.
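
To see how little is at stake here, consider a toy Python sketch (all names hypothetical) of a robot that accepts claims from trustworthy parties; any snapshot description F of its ability is stale the moment it accepts one:

    # Toy sketch, all names hypothetical: a robot that accepts claims
    # made by trustworthy parties about things it does not already know.

    class RobotMathematician:
        def __init__(self):
            self.known_facts = set()  # theorems the robot can currently state

        def accept_from_trusted_party(self, fact):
            self.known_facts.add(fact)  # accepting a claim changes its ability

    robot = RobotMathematician()

    # F: a snapshot description of the robot's ability, taken right now.
    F = frozenset(robot.known_facts)

    # Telling the robot which F describes it is genuinely new information...
    robot.accept_from_trusted_party("the G(G) derived from F is non-terminating")

    # ...and the moment the robot accepts it, F is out of date.
    assert frozenset(robot.known_facts) != F

The snapshot F plays exactly the role of the formal system in the Gödelian argument: it is accurate only for as long as nothing is added to it.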

Is this a practical procedure for increasing human mathematical ability?

If we ignore the difficulties of describing a mathematician's brain precisely enough to be able to formulate F, and we also ignore the question of infallibility of human mathematical ability, then the answer is "yes". There are two things that can happen to a mathematician's understanding of mathematics after they learn of an F that describes their mathematical ability:

  1. F is simple enough that the mathematician can commit it to memory without sacrificing any of their existing mathematical ability, in which case their ability to prove theorems entirely on their own has definitely increased.
  2. F is so large that the mathematician cannot commit it to memory. Their mathematical ability is only increased if they are allowed to retain a copy of F on some external medium and refer to it as necessary.

The fact that the algorithmic bound F on a mathematician's mathematical ability can be raised just by giving them a (very large) piece of paper with something written on it alerts us to the necessity of being very specific about what resources are available to a mathematician when we attempt to determine an algorithmic bound on that mathematician's ability.

What about the mathematical ability of all human mathematicians combined?

It's just the same argument, but with a different starting point.

What if we gave the mathematician an apparatus for analysing the construction of their own brain, so that they could derive F without any outside help?

Yet another starting point. The fallacy is obvious if we apply this improvement to a robot mathematician. The derived F will take account of everything except the existence of the analysing apparatus.
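
The same point can be put in a short Python sketch (again with purely hypothetical names): the apparatus derives a description of the robot alone, and that description stops being accurate as soon as the robot absorbs any conclusion drawn from it:

    import copy

    # Hypothetical analysing apparatus: it inspects the robot's construction
    # and derives a description F of the robot alone -- everything except
    # the apparatus itself and the robot's subsequent reading of F.
    def analyse(robot_state):
        return copy.deepcopy(robot_state)

    robot_state = {"known_facts": ["standard theorems"]}
    F = analyse(robot_state)

    # Once the robot absorbs a conclusion drawn from F, the combined
    # robot-plus-apparatus system has moved beyond what F describes.
    robot_state["known_facts"].append("the G(G) derived from F is non-terminating")
    assert F != robot_state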

An exciting new mathematical journal, to be published soon!!

One implication of Roger Penrose's theory is that human mathematicians are capable of producing an unending sequence of mathematical truths unbounded by any algorithmic procedure. Thus I have started the Algorithmically Unbounded Journal of Mathematical Truths.

Licence

This web page is licensed under the GNU Free Documentation License.