In his book "Shadows of the Mind", Roger Penrose details an argument based on Gödel's Incompleteness Theorem, intended to prove that the mathematical capabilities of human mathematicians are non-computable.
Here is a brief description and proof of Gödel's Incompleteness Theorem, in the form relevant to this argument. Consider a computable function F(X,Y), whose arguments are a program X and an input Y, with the property that F(X,Y) terminates only if X(Y) does not terminate. Define a program G by G(X) = F(X,X), and consider G(G) = F(G,G). If G(G) terminated, then F(G,G) would terminate, which would imply that G(G) does not terminate, a contradiction. Therefore G(G) does not terminate. But G(G) is the same computation as F(G,G), so F(G,G) does not terminate either: F never proves the true statement that G(G) does not terminate.
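To make the construction concrete, here is a minimal Python sketch. The representation of programs as Python functions, and the name make_G, are my own illustrative assumptions; the argument itself does not depend on any particular language.

```python
def make_G(F):
    """Diagonalise a sound non-termination prover F.

    F(X, Y) is assumed to terminate only if the program X,
    run on input Y, does not terminate.
    """
    def G(X):
        # G(X) terminates exactly when F proves X(X) non-terminating.
        return F(X, X)
    return G

# For any sound F, the program G = make_G(F) satisfies G(G) == F(G, G).
# If G(G) terminated, F would have proved G(G) non-terminating,
# contradicting soundness; so G(G) does not terminate. And since
# F(G, G) is the same computation as G(G), F itself never proves
# that true statement.
```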
We can regard F as representing a set of logical axioms and rules of proof in a mathematical theory that enables us to prove theorems about the non-termination of computer programs. Thus a terminating execution of F(X,Y) constitutes a "proof" of the theorem that X(Y) does not terminate. The system F is incomplete, in that it does not prove all true statements about non-termination, and F can be explicitly extended to prove a larger set of true statements. Of course the extended F is still incomplete (for exactly the same reason) and can itself be extended, and so on.
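A sketch of the extension step, reusing make_G from above (the identity test `X is G` is an illustrative stand-in for comparing program texts, which is decidable for concrete programs):

```python
def extend(F):
    """Return a sound prover that proves everything F proves, plus
    the one true statement the Godelian argument denies to F."""
    G = make_G(F)   # the diagonal program derived from F
    def F2(X, Y):
        if X is G and Y is G:
            return          # terminate: "prove" that G(G) is non-terminating
        return F(X, Y)      # otherwise defer to F
    return F2

# extend(F) is still incomplete for exactly the same reason:
# make_G(extend(F)) yields a new diagonal program that extend(F)
# cannot handle, and so on indefinitely.
```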
We can regard F as an algorithm that specifies the operation of a robotic mathematician tasked with proving theorems about non-terminating programs. Gödel's Incompleteness Theorem tells us that merely by inspecting the design of such a robot, we can "know" something that the robot does not know, i.e. that G(G) as defined above is non-terminating.
Let us make the following assumptions -

1. The behaviour of a human mathematician's brain is entirely determined by the laws of physics.
2. The relevant laws of physics are computable.
3. Human mathematicians are infallible: the theorems they state about non-terminating programs are always correct.
From these assumptions we come to the conclusion that there is some algorithm F which is equivalent to the ability of a human mathematician to state correct theorems about non-terminating programs.
But we can derive G from F, as above, and know that G(G) is a non-terminating program.
But the reader is a human, so we have just proved that a human can know something that a human cannot know: F captures, by assumption, the full extent of human ability to state such theorems, yet F can never state that G(G) is non-terminating.
This is a contradiction, so one of the initial assumptions is wrong.
Roger Penrose wants to conclude that assumption 2 is incorrect. Some of his opponents challenge assumption 3, i.e. the infallibility of human mathematicians.
For the sake of argument, I am willing to concede that assumption 3 is correct, with the caveat that it may cause difficulties in practice if anyone attempts to carry out procedures based on that assumption. I concede it because I believe that the real fallacy in Penrose's argument lies elsewhere.
To make the contradiction obvious, let the human mathematician who understands that G(G) is non-terminating be the same human mathematician whose mathematical ability F determines. If the mathematician were a robot, telling them that G(G) is non-terminating would cause a genuine increase in their mathematical ability. But Roger Penrose claims that the mathematician already knows that G(G) is non-terminating, because they understand the Gödelian argument.
I will show that this is not the case. We must return to basic principles. The task assigned to the function F is the following -

Given a program X and an input Y, find a proof that X(Y) does not terminate; the execution of F(X,Y) terminates exactly when such a proof is found.
The human mathematician is given X=G and Y=G, with G defined such that G(X)=F(X,X), where F is the function that describes the mathematician's mathematical ability. The mathematician does not know that F is the function that describes their mathematical ability, unless we tell them so. If they did recognise F, then their understanding of the Gödelian argument would allow them to determine that the G(G) derived from F is non-terminating.
If we tell the mathematician that F is the program determining their mathematical ability, then we are giving them extra information, and that is what enables them to state that G(G) is non-terminating, apparently going beyond the capability determined by F.
We can just as easily program a robot mathematician to accept claims made by trustworthy parties about things that the robot does not already know, for example that a function F is the function that determines that robot's mathematical ability. But the moment that the robot accepts that information, F goes out of date as a description of that robot's mathematical ability.
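As a toy illustration of this point (the RobotMathematician class is my own illustrative assumption, building on the make_G and extend sketches above, not anything from Penrose's book):

```python
class RobotMathematician:
    """A robot whose entire theorem-stating ability is a single prover F."""

    def __init__(self, prover):
        self.prover = prover

    def state_non_terminating(self, X, Y):
        # Terminates (i.e. states the theorem) only if X(Y) does not terminate.
        return self.prover(X, Y)

    def accept_claim_about_own_prover(self):
        # A trustworthy party tells the robot "self.prover is the F that
        # determines your mathematical ability". Accepting this claim lets
        # the robot apply the Gödelian argument to its own prover...
        self.prover = extend(self.prover)
        # ...and from this moment the original F is out of date as a
        # description of the robot's mathematical ability.
```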
So can there actually be an F that describes a given human mathematician's mathematical ability? If we ignore the difficulties of describing a mathematician's brain precisely enough to be able to formulate F, and we also ignore the question of the infallibility of human mathematical ability, then the answer is "yes". There are two things that can happen to a mathematician's understanding of mathematics after they learn of an F that describes their mathematical ability:

1. They accept the claim. Their understanding of the Gödelian argument then lets them state that G(G) is non-terminating, so their ability now exceeds what F describes, and F is out of date.
2. They do not accept the claim. In that case they never come to know that G(G) is non-terminating, and F remains an accurate description of their ability.
The fact that the F bounding a mathematician's mathematical ability can be increased just by giving them a (very large) piece of paper with something written on it alerts us to the necessity of being very specific about what resources are available to a mathematician when we attempt to determine an algorithmic bound on that mathematician's ability.
It's just the same argument, but with a different starting point.
Yet another starting point: derive F by analysing the mathematician's brain with some apparatus. The fallacy is obvious if we apply this improvement to a robot mathematician: the derived F will take account of everything except the existence of the analysing apparatus.
One implication of Roger Penrose's theory is that human mathematicians are capable of producing an unending sequence of mathematical truths unbounded by any algorithmic procedure. Thus I have started the Algorithmically Unbounded Journal of Mathematical Truths.
This web page is licensed under the GNU Free Documentation License.