Gödel's last theorem. Confession of a great logician


One of the most famous theorems of mathematical logic, lucky and unlucky at the same time. In this it is similar to Einstein's special theory of relativity. On the one hand, almost everyone has heard something about them. On the other hand, in the popular interpretation, Einstein's theory, as you know, "says everything in the world is relative". And Gödel's incompleteness theorem (hereinafter simply TGN), in an approximately equally free folk formulation, "proves that there are things incomprehensible to the human mind". And so some try to adapt it as an argument against materialism, while others, on the contrary, prove with its help that there is no God. It's funny not only that both sides cannot be right at the same time, but also that neither one nor the other bothers to figure out what, in fact, this theorem says.

So, here goes. Below I will try to explain it "on the fingers". My exposition will, of course, be intuitive rather than rigorous, so I ask mathematicians not to judge me harshly. It is possible that for non-mathematicians (to whom, in fact, I also belong) there will be something new and useful in what follows.

Mathematical logic is indeed a rather complicated science, and, most importantly, an unfamiliar one. It requires careful and rigorous reasoning, in which it is important not to confuse what has actually been proven with what "is clear anyway." However, I hope that to understand the following "outline of the proof of TGN" the reader will need only school mathematics and computer science, logical thinking skills and 15-20 minutes of time.

Simplifying somewhat, TGN asserts that in sufficiently complex languages ​​there are unprovable propositions. But in this phrase, almost every word needs an explanation.

Let's start by trying to figure out what a proof is. Let's take some school problem in arithmetic. For example, suppose we are required to prove the correctness of the following simple formula: "∀x (x−1)(x−2)−2 = x(x−3)" (recall that the symbol ∀ is read "for any" and is called the "universal quantifier"). It can be proved by transforming it identically, say, like this:

    ∀x (x−1)(x−2)−2 = x(x−3)
    ∀x x²−3x+2−2 = x²−3x
    ∀x x²−3x−x²+3x = 0
    ∀x 0 = 0
    TRUE
The transition from one formula to another occurs according to certain well-known rules. The transition from the 4th formula to the 5th occurred, say, because every number is equal to itself - such is the axiom of arithmetic. And the whole procedure of proving, thus, translates the formula into the boolean value TRUE. The result could be FALSE - if we refuted some formula. In this case, we would prove its negation. It is possible to imagine a program (and such programs are actually written) that would prove such (and more complex) propositions without human intervention.
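As a purely illustrative sketch of such a mechanical prover, the identity above can be checked by expanding both sides into polynomial coefficient lists and comparing them; the helper names here (poly_mul, poly_sub) are invented for this example.

```python
# A toy "prover" for the identity (x-1)(x-2) - 2 = x(x-3):
# polynomials are represented as coefficient lists, index = power of x,
# and the identity holds for all x iff the difference is the zero polynomial.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_sub(p, q):
    """Subtract polynomial q from polynomial p."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

# Left side: (x-1)(x-2) - 2
left = poly_sub(poly_mul([-1, 1], [-2, 1]), [2])
# Right side: x(x-3)
right = poly_mul([0, 1], [-3, 1])

if all(c == 0 for c in poly_sub(left, right)):
    print("TRUE")   # prints TRUE
```

Unlike a numeric spot-check, comparing coefficients really does establish the formula for every x at once, which is what the quantifier ∀ demands.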

Let's state the same thing a little more formally. Suppose we have a set of strings of characters of some alphabet, and rules by which a subset S of so-called statements can be selected from them: grammatically meaningful phrases, each of which is either true or false. We can say that there is a function P that assigns to each statement from S one of two values, TRUE or FALSE (that is, maps S into the two-element Boolean set B).

Let's call such a pair, a set of statements S and a function P from S to B, a "propositional language". Note that in the everyday sense the concept of a language is somewhat broader. For example, the Russian phrase "Well, come here!" is neither true nor false; from the point of view of mathematical logic it is not a statement.

For what follows, we need the notion of an algorithm. I will not give its formal definition here, since that would lead us rather far afield. I will limit myself to an informal one: an "algorithm" is a sequence of unambiguous instructions (a "program") that converts input data into a result in a finite number of steps. The words "in a finite number of steps" are fundamentally important: if the program hangs on some input, it does not describe an algorithm. For simplicity, and as applied to our case, the reader may take an algorithm to be a program written in any programming language known to him that, for any input from a given class, is guaranteed to finish with a Boolean result.

Let us ask ourselves: does every function P have a "proving algorithm" (or, for short, a "deductics") equivalent to it, that is, one that converts each statement into exactly the same Boolean value as P does? The same question can be put more concisely: is every function over a set of statements computable? As you can already guess, it follows from the validity of TGN that no: there are non-computable functions of this type. In other words, not every true statement can be proven.

It may very well be that this claim provokes an internal protest in you. This is due to several circumstances. First, when we are taught school mathematics, we sometimes get the false impression that the phrases "theorem X is true" and "theorem X can be proved or verified" are almost identical. But if you think about it, this is not at all obvious. Some theorems are proved quite simply (for example, by enumerating a small number of cases), and some are very difficult. Consider, for example, Fermat's famous Last Theorem:

    There are no natural numbers x, y, z and n > 2 such that xⁿ + yⁿ = zⁿ,

the proof of which was found only three and a half centuries after its first formulation (and it is far from elementary). One must distinguish between the truth of a statement and its provability. It does not follow from anywhere that there are no true but unprovable (and not fully verifiable) statements.
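The gap between checking and proving is easy to feel here: a finite search can only fail to find counterexamples in a small range, and that proves nothing about all natural numbers. A minimal sketch (the function name and bounds are invented for the illustration):

```python
# Brute-force search for a counterexample to Fermat's Last Theorem:
# x^n + y^n = z^n with natural x, y, z and n > 2. The search is finite,
# so a negative result is merely a check, not a proof.

def search_counterexample(limit, n_max):
    """Look for x^n + y^n == z^n with 1 <= x, y <= limit and 2 < n <= n_max."""
    for n in range(3, n_max + 1):
        for x in range(1, limit + 1):
            for y in range(x, limit + 1):
                s = x**n + y**n
                z = round(s ** (1.0 / n))
                # guard against floating-point rounding of the n-th root
                for cand in (z - 1, z, z + 1):
                    if cand >= 1 and cand**n == s:
                        return (x, y, cand, n)
    return None

print(search_counterexample(50, 5))   # prints None
```

No finite amount of such checking would have settled the theorem; Wiles's proof had to work for all numbers at once.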

The second intuitive argument against TGN is subtler. Suppose we have some statement that is unprovable (within a given deductics). What prevents us from accepting it as a new axiom? This would complicate our system of proofs slightly, but that is not terrible. This argument would be perfectly correct if there were finitely many unprovable statements. In practice the following may happen: after postulating a new axiom, you stumble upon a new unprovable statement. Accept it as another axiom, and you stumble upon a third. And so on ad infinitum. In this case the deductics is said to remain incomplete. We can also take forceful measures so that the proving algorithm finishes after a finite number of steps with some result for any statement of the language. But then it will begin to lie: yielding TRUE for false statements, or FALSE for true ones. In such cases the deductics is said to be inconsistent. Thus, another formulation of TGN sounds like this: "There are propositional languages for which a complete consistent deductics is impossible", hence the name of the theorem.

Sometimes called "Gödel's theorem" is the statement that any theory contains problems that cannot be solved within the framework of the theory itself and require its generalization. In a sense this is true, although such a formulation obscures the issue rather than clarifies it.

I also note that if we were talking about ordinary functions mapping the set of real numbers into itself, the "non-computability" of a function would surprise no one (just do not confuse "computable functions" with "computable numbers"; these are different things). Any schoolchild knows that, say, in the case of the function sin x you must be very lucky with the argument for the computation of the exact decimal representation of its value to finish in a finite number of steps. Most likely you will compute it using an infinite series, and the computation will never reach the exact result, although it can come arbitrarily close, simply because the sine of most arguments is irrational. TGN simply tells us that even among functions whose arguments are strings and whose values are zero or one, non-computable functions, although arranged quite differently, also exist.
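This "approaches but never finishes" behaviour is easy to see with the Taylor series for sin x; the sketch below (with an invented helper name) computes partial sums that converge to sin 1 without any of them being exact.

```python
import math

# Computing sin(x) by its Taylor series: each partial sum is obtained in
# finitely many steps, but only *approaches* the (usually irrational)
# exact value sin(1) = 0.8414709848...

def taylor_sin(x, terms):
    """Partial sum of the Taylor series for sin(x) with `terms` terms."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

for n in (1, 2, 3, 5, 8):
    print(n, taylor_sin(1.0, n))
```

Each printed value is closer to sin 1 than the last, yet no finite number of terms yields the exact value, because that value is irrational.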

For what follows, we will describe the "language of formal arithmetic". Consider the class of text strings of finite length consisting of Arabic numerals, variables (letters of the Latin alphabet) taking natural values, spaces, signs of arithmetic operations, equality and inequality, the quantifiers ∃ ("there exists") and ∀ ("for any") and, perhaps, some other symbols (their exact number and composition are unimportant for us). It is clear that not all such strings are meaningful (for example, "12=+∀x>" is nonsense). The subset of meaningful expressions from this class (that is, strings that are true or false from the standpoint of ordinary arithmetic) will be our set of statements.

Examples of formal arithmetic statements:

    1 = 1
    2 × 2 = 5
    ∃x x > 3
    ∀y ∀z y × z > y + z
etc. Now let us call a "formula with a free parameter" (FSP) a string that becomes a statement if a natural number is substituted into it as that parameter. Examples of FSPs (with parameter x):

    x = 0
    2x² = x
    ∃y x + y > x
etc. In other words, FSPs are equivalent to functions of a natural argument with a Boolean value.
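This equivalence is easy to model: below, FSPs of the sort discussed above are represented as Python predicates of a natural argument. The dictionary and its keys are invented for the illustration.

```python
# FSPs as functions of a natural argument with a Boolean value:
# each predicate takes a natural number and returns True or False.

fsp_examples = {
    "x=0":       lambda x: x == 0,
    "2x^2=x":    lambda x: 2 * x * x == x,
    "exists y: x+y>x": lambda x: any(x + y > x for y in range(1, 2)),  # y=1 works
}

for name, f in fsp_examples.items():
    print(name, [f(x) for x in (0, 1, 2)])
```

Substituting a concrete number into an FSP is just calling the predicate; the result is a statement's truth value.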

Denote the set of all FSPs by the letter F. It is clear that it can be ordered (for example, we first write out the one-letter formulas ordered alphabetically, then the two-letter ones, and so on; it is not fundamental for us in which alphabet the ordering takes place). Thus, every FSP corresponds to its number k in the ordered list, and we will denote it by Fk.
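The ordering described here, first by length and then alphabetically, is known as shortlex order, and it is simple to generate. A sketch over a toy two-letter alphabet (the function name is invented):

```python
from itertools import count, product

# Enumerate all strings over a fixed alphabet in shortlex order:
# first by length, then alphabetically. Every string gets a unique
# position k in this (infinite) list.

def enumerate_words(alphabet):
    """Yield all nonempty words over `alphabet`, shortest first."""
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

words = enumerate_words("ab")
first_six = [next(words) for _ in range(6)]
print(first_six)   # prints ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Applied to the alphabet of formal arithmetic, the same scheme numbers every string, and in particular every FSP, which is all the proof below needs.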

Let us now turn to a sketch of the proof of TGN in the following formulation:

  • For the propositional language of formal arithmetic, there is no complete consistent deductics.

We will prove by contradiction.

So let's assume that such a deductics exists. Let us describe the following auxiliary algorithm A, which assigns a Boolean value to a natural number k as follows:

  1. Find the k-th formula in the list F.
  2. Substitute the number k into it as the argument.
  3. Apply our proving algorithm to the resulting statement (by our assumption, it exists), converting it into TRUE or FALSE.
  4. Apply logical negation to the result.
Simply put, the algorithm yields TRUE if and only if substituting into the k-th FSP its own number in our list gives a false statement.
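The diagonal trick at the heart of this construction can be sketched in a few lines. Here the infinite list F is modelled by a finite list of invented predicates, and the hypothetical proving algorithm is modelled by simply evaluating the predicate; this is only an illustration of the mechanism, not the proof itself.

```python
# A toy model of the diagonal construction: A(k) negates the result of
# substituting k into the k-th formula of a (here finite) list F.

F = [
    lambda k: k == 0,        # F_0: "x = 0"
    lambda k: k % 2 == 0,    # F_1: "x is even"
    lambda k: k > 1,         # F_2: "x > 1"
]

def A(k):
    """The auxiliary algorithm: NOT (F_k applied to k)."""
    return not F[k](k)

# A disagrees with every formula in the list at the diagonal point k = k:
for k in range(len(F)):
    print(k, A(k), F[k](k))
```

If A itself occupied some position n in the list, A(n) would have to equal both Fn(n) and its negation, which is exactly the contradiction derived below.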

Here we come to the only place where I will ask the reader to take my word for it.

Obviously, under the assumption made above, any FSP from F can be matched with an algorithm that takes a natural number as input and produces a Boolean value as output. Less obvious is the converse statement:

    Lemma: Any algorithm that converts a natural number into a Boolean value corresponds to some FSP from the set F.
The proof of this lemma would require at least a formal, not an intuitive, definition of the notion of an algorithm. However, if you think about it a little, it is quite plausible. Indeed, algorithms are written in algorithmic languages, among which there are exotic ones such as Brainfuck, which consists of eight one-character words and in which, nevertheless, any algorithm can be implemented. It would be strange if the richer language of formal arithmetic formulas we have described turned out to be poorer, although it is, no doubt, not very suitable for ordinary programming.

After passing this slippery place, we quickly get to the end.

So, we have described the algorithm A above. According to the lemma I asked you to believe, there exists an equivalent FSP. It has some number in the list F, say n. Let us ask ourselves: what is Fn(n)? Suppose it is TRUE. Then, by the construction of the algorithm A (and hence of the function Fn equivalent to it), this means that the result of substituting the number n into the function Fn is FALSE. The converse is checked in the same way: Fn(n) = FALSE implies Fn(n) = TRUE. We have arrived at a contradiction, which means that the original assumption is wrong. Thus, for formal arithmetic there is no complete consistent deductics. Q.E.D.

Here it is appropriate to recall Epimenides, who, as is well known, declared that all Cretans are liars, being himself a Cretan. In a more concise formulation, his statement (known as the "liar paradox") can be put as: "I am lying." It is precisely such a statement, one that itself proclaims its own falsity, that we used for the proof.

In conclusion, I want to note that TGN does not claim anything particularly surprising. After all, everyone has long been accustomed to the fact that not all numbers can be represented as a ratio of two integers (remember, this statement has a very elegant proof that is more than two thousand years old?). Nor is every number a root of a polynomial with rational coefficients. And now it turns out that not all functions of a natural argument are computable.
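The two-thousand-year-old proof alluded to here is presumably the classic argument that √2 is irrational; a sketch in modern notation:

```latex
% Sketch: \sqrt{2} is not a ratio of two integers.
\text{Suppose } \sqrt{2} = \tfrac{p}{q} \text{ with } \gcd(p, q) = 1.
\text{Then } p^2 = 2q^2, \text{ so } p \text{ is even: } p = 2m.
\text{Then } 4m^2 = 2q^2, \text{ i.e. } q^2 = 2m^2, \text{ so } q \text{ is even too,}
\text{contradicting } \gcd(p, q) = 1.
```

Like the diagonal argument above, it is a proof by contradiction: assume the object exists, and derive an absurdity.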

The sketch of the proof given here was for formal arithmetic, but it is not difficult to see that TGN applies to many other propositional languages as well. Of course, not all languages are like that. For example, let us define a language as follows:

  • "Any phrase in the Chinese language is a true statement if it is contained in Comrade Mao Tse Tung's quote book, and a false one if it is not."

Then the corresponding complete and consistent proving algorithm (we can call it "dogmatic deductics") looks something like this:

  • "Flip through Comrade Mao Tse Tung's quote book until you find the statement you are looking for. If it is found, then it is true, and if the quote book is over and the statement is not found, then it is false."

Here we are saved by the fact that any citation is obviously finite, so the process of "proving" will inevitably end. Thus, TGN is inapplicable to the language of dogmatic statements. But we were talking about complex languages, right?
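The dogmatic deductics is trivially an algorithm in the sense used above: a terminating linear scan over a finite list. A sketch (the quote-book contents are, of course, invented placeholders):

```python
# "Dogmatic deductics": truth is membership in a finite quote book,
# so the proving algorithm is guaranteed to halt with a Boolean.

QUOTE_BOOK = [
    "statement one",
    "statement two",
    "statement three",
]

def dogmatic_prover(statement):
    """Complete and consistent for this language: always halts."""
    for quote in QUOTE_BOOK:
        if quote == statement:
            return True    # found in the book -> true
    return False           # book exhausted -> false

print(dogmatic_prover("statement two"))    # prints True
print(dogmatic_prover("anything else"))    # prints False
```

The finiteness of the book is what guarantees termination, and hence completeness and consistency; formal arithmetic has no such finite oracle.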



Uspensky V. A. Gödel's incompleteness theorem. Theoretical Computer Science 130 (1994).

Perhaps Gödel's incompleteness theorem is truly unique. Unique in that people invoke it when they want to prove "everything in the world": from the existence of gods to the absence of reason. I have always been interested in a more "primary" question: how many of those who refer to the incompleteness theorem could not only formulate it, but also prove it? I am publishing this article because it presents a very accessible formulation of Gödel's theorem. I recommend first reading the article by Tullio Regge, "Kurt Gödel and his famous theorem".

The conclusion about the impossibility of a universal criterion of truth is a direct consequence of the result obtained by Tarski by combining Gödel's undecidability theorem with his own theory of truth, according to which there can be no universal criterion of truth even for the relatively narrow domain of number theory, and hence for any science that uses arithmetic. Naturally, this result applies a fortiori to the concept of truth in any non-mathematical field of knowledge in which arithmetic is widely used.

Karl Popper

Vladimir Andreevich Uspensky was born on November 27, 1930 in Moscow. He graduated from the Faculty of Mechanics and Mathematics of Moscow State University (1952). Doctor of Physical and Mathematical Sciences (1964). Professor and Head of the Department of Mathematical Logic and Theory of Algorithms of the Faculty of Mechanics and Mathematics (1966). He lectures on "Introduction to Mathematical Logic", "Computable Functions" and "Gödel's Completeness Theorem", and has supervised 25 candidates and 2 doctors of sciences.

1. Statement of the problem

The incompleteness theorem, whose exact formulation we will give at the end of this chapter, and perhaps later (if the reader is interested) also its proof, states approximately the following: under certain conditions, in any language there exist true but unprovable statements.

When we formulate a theorem in this way, almost every word requires some explanation. Therefore, we will begin by explaining the meaning of the words we use in this formulation.

We will not give the most general possible definition of a language, preferring to confine ourselves to those language concepts that we will need later. There are two such concepts: "the alphabet of the language" and "the set of true statements of the language".

1.1.1. Alphabet

By an alphabet we mean a finite set of elementary signs (that is, things that cannot be broken down into component parts). These signs are called the letters of the alphabet. By a word of an alphabet we mean a finite sequence of letters. For example, ordinary words of the English language (including proper names) are words of a 54-letter alphabet (26 lowercase letters, 26 uppercase letters, the hyphen and the apostrophe). Another example: natural numbers in decimal notation are words of a 10-letter alphabet whose letters are the signs 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. We will use ordinary capital letters to denote alphabets. If L is an alphabet, then L* denotes the set of all words of the alphabet L, that is, words formed from its letters.

We will assume that any language has its own alphabet, so that all expressions of that language (that is, names of various objects, statements about these objects, etc.) are words of this alphabet. For example, any sentence of the English language, as well as any text written in English, can be considered a word of an extended 54-letter alphabet that also includes punctuation marks, the interword space, the new-paragraph character, and possibly some other useful characters.

By assuming that the expressions of a language are words of some alphabet, we exclude from consideration "multi-level" expressions such as integrals with limits written above and below the integral sign. However, this limitation is not too significant, since any such expression can, using suitable conventions, be "stretched" into linear form.

Any set M contained in L* is called a word set of the alphabet L. If we simply say that M is a word set, we mean that it is a word set of some alphabet. Now the assumption about languages made above can be rephrased as follows: in any language, any set of expressions is a word set.
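The definition of L* translates directly into a membership test; since L* itself is infinite, the sketch below (with an invented function name) checks whether a given string is a word of an alphabet rather than enumerating them all.

```python
# L* in miniature: a membership test for "s is a word of alphabet L",
# i.e. a finite nonempty string made only of letters of L.

def is_word(s, alphabet):
    """True iff s is a finite nonempty string of letters of `alphabet`."""
    return len(s) > 0 and all(ch in alphabet for ch in s)

# The 10-letter alphabet of decimal numerals from the text:
DIGITS = set("0123456789")
print(is_word("2030", DIGITS))   # prints True
print(is_word("20a0", DIGITS))   # prints False
```

The natural numbers in decimal notation are exactly the words of this 10-letter alphabet (up to the convention about leading zeros).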

1.1.2. The set of true statements

We assume that we are given a subset T of the set L* (where L is the alphabet of the language under consideration), called the set of "true statements" (or simply "truths"). In passing directly to the subset T, we skip the following intermediate steps of reasoning: first, which words of the alphabet L are well-formed expressions of the language, that is, expressions that have a definite meaning under our interpretation of the language (for example, 2+3, x+3, x=y, x=3, 2=3, 2=2 are well-formed expressions, while expressions like +=x are not); second, which expressions are formulas, i.e. may depend on a parameter (for example, x=3, x=y, 2=3, 2=2); third, which formulas are closed formulas, i.e. statements that do not depend on parameters (for example, 2=3, 2=2); and finally, which closed formulas are true statements (for example, 2=2).
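The chain of filters just described can be sketched on the running examples; the classification below is hard-coded for these sample strings only, purely to show how each stage narrows the previous one.

```python
# word -> well-formed expression -> formula -> closed formula -> truth,
# illustrated on the sample strings from the text. The sets are
# hand-filled lookup tables, not a real parser.

samples = ["2+3", "x+3", "x=y", "x=3", "2=3", "2=2", "+=x"]

well_formed = {"2+3", "x+3", "x=y", "x=3", "2=3", "2=2"}   # "+=x" is not
formulas    = {"x=y", "x=3", "2=3", "2=2"}                 # may depend on a parameter
closed      = {"2=3", "2=2"}                               # no free parameters
true_stmts  = {"2=2"}                                      # true closed formulas

for s in samples:
    print(s, [s in well_formed, s in formulas, s in closed, s in true_stmts])
```

Each set is a subset of the previous one, which is exactly the nesting the text describes before arriving at T.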

1.1.3. Fundamental language pair

1.2. "Unprovable"

"Unprovable" means having no evidence.

1.3. Proof

Although the term "proof" is perhaps one of the most important in mathematics (the Bourbaki open their book "Foundations of Mathematics" with the words: "From the time of the ancient Greeks, to say 'mathematics' has meant the same as to say 'proof'"), it has no precise definition. As a whole, the concept of proof, with all its semantic branches, belongs rather to psychology than to mathematics. But be that as it may, a proof is simply an argument that we ourselves find convincing enough to use it to convince everyone else.

When written down, a proof becomes a word in some alphabet P, just as any English text is a word in an alphabet of the kind exemplified above. The set of all proofs forms a subset (and a rather large one) of the set P∞. We will not attempt to give a precise definition of this at once "naive" and "absolute" concept of proof, or, which amounts to the same thing, to define the corresponding subset of P∞. Instead, we will consider a formal analogue of this vague concept, for which we will still use the term "proof" in what follows. This analogue has two very important features that distinguish it from the intuitive concept (although the intuitive idea of proof does reflect them to some extent). First, we will allow different conceptions of proof, that is, different subsets of proofs in P∞; more than that, we will in fact allow the alphabet of proofs P itself to vary. Second, we will require that for each such conception of proof there be an effective method, in other words an algorithm, that necessarily determines whether a given word of the alphabet P is a proof or not. We will also assume that there is an algorithm by which one can always determine which statement a given proof proves. (In many situations, the statement being proved is simply the last statement in the sequence of steps that make up the proof.)

Thus, our final wording of the definition is as follows:

(1) We have the alphabet L (the alphabet of the language) and the alphabet P (the alphabet of the proof).

(2) We are given a set P which is a subset of P∞ and whose elements are called "proofs". In what follows, we will assume that we also have an algorithm that allows us to determine whether an arbitrary word of the alphabet P is an element of the set P, that is, a proof, or not.

(3) We also have a function δ (for finding what exactly has been proved), whose domain Δ satisfies the condition P ⊆ Δ ⊆ P∞ and whose range lies in L∞. We assume that we have an algorithm that computes this function (the exact meaning of the words "an algorithm computes a function" is this: the values of the function are obtained by means of this algorithm, a set of special transformation rules). We will say that an element p of P is a proof of the word δ(p) of the alphabet L.

A triple ⟨P, P, δ⟩ that satisfies conditions (1)-(3) is called a deductive system over the alphabet L.
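
Here is a minimal sketch of such a triple for a toy language of numeral equalities (the design is my own illustration, not Gödel's construction): the set of proofs is decidable by an algorithm, as condition (2) requires, and the function that extracts the proved statement, as in condition (3), is computable.

```python
PROOF_ALPHABET = set("0123456789=")  # the alphabet of proofs

def is_proof(p):
    """Condition (2): an algorithm deciding whether the word p is a proof.
    In this toy system, a proof of 'n=n' is the word 'n=n' itself."""
    if not all(ch in PROOF_ALPHABET for ch in p):
        return False
    parts = p.split("=")
    return len(parts) == 2 and parts[0] == parts[1] != ""

def delta(p):
    """Condition (3): the statement proved by p (defined only on proofs)."""
    if not is_proof(p):
        raise ValueError("not a proof")
    return p

print(is_proof("7=7"))  # True
print(is_proof("7=8"))  # False
print(delta("7=7"))     # 7=7
```

Note that in this toy system only true statements have proofs, so it is consistent in the sense defined below, though of course very far from complete.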

For the reader familiar with the usual way of defining "proof" in terms of "axioms" and "rules of inference", we now explain how that method can be regarded as a special case of the definition given in section 1.3.2. That is, a proof is usually defined as a sequence of expressions of the language, each of which either is an axiom or is obtained from previously derived statements by one of the rules of inference. If we add a new word * to the alphabet of our language, we can write such a proof as a word composed in the resulting alphabet: a sequence of expressions becomes the word C1*C2*...*Cn. In this case, the function that determines what exactly has been proved takes as its value the part of this word that follows the last letter * in the sequence. The algorithm whose existence is required in section 1.3.2 of the definition is easily constructed once we precisely fix any of the accepted meanings of the words "axiom" and "rule of inference".
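
In code, that convention amounts to taking the part of the word after the last '*'; the step names below are hypothetical placeholders, purely for illustration:

```python
def proved_statement(proof_word):
    """Given a proof written as C1*C2*...*Cn, return the statement proved:
    the part of the word after the last letter '*' (or the whole word if
    no '*' occurs)."""
    return proof_word.rsplit("*", 1)[-1]

print(proved_statement("axiom1*axiom2*theorem"))  # theorem
```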

1.4. Attempts to accurately formulate the incompleteness theorem

1.4.1. First try

"Under certain conditions, for the fundamental pair of the language of the alphabet L and the deductive system over L, there always exists a word in T that has no proof." This option still looks vague. In particular, we could easily come up with any number of deductive systems that have very few provable words. For example, in an empty deductive system (where P = ?) there are no words at all that would have proofs.

1.4.2. Second try

There is another, more natural approach. Suppose we are given a language, in the sense that we are given a fundamental pair of this language. We now look for a deductive system over L (intuitively, for a proof technique) by means of which we could prove as many words from T as possible, in the limit all words from T. Gödel's theorem describes a situation in which no such deductive system (one in which every word of T would be provable) exists. Thus, we would like to formulate the following statement:

"Under certain conditions regarding the fundamental pair, there is no such deductive system in which every word from T would have a proof."

However, such a statement is obviously false, since it suffices to take the deductive system in which P = L, P = P∞ and δ(p) = p for every p in P∞; then every word of L∞ is trivially provable. Therefore, we need to accept some restriction on which deductive systems we allow.
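
Concretely, the degenerate system above looks like this (a sketch under the same toy conventions: each word is its own proof):

```python
def delta(p):
    """The degenerate proof-to-statement function: every word proves itself."""
    return p

def is_provable(statement):
    """A statement is provable iff some proof proves it; here its own text does."""
    return delta(statement) == statement  # trivially True for every word

print(is_provable("2=3"))  # True, even though "2=3" is false
```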

1.5. Consistency

It would be quite natural to require that only "true statements", that is, only words from T, can be proved. We will say that a deductive system is consistent with respect to a fundamental pair if δ(P) ⊆ T. In all further reasoning, we will only be interested in such consistent deductive systems. Given a language, it is extremely tempting to look for a consistent deductive system in which every true statement has a proof. The variant of Gödel's theorem that interests us states precisely that, under certain conditions on the fundamental pair, no such deductive system can be found.

1.6. Completeness

We say that a deductive system is complete with respect to a fundamental pair if T ⊆ δ(P). Our formulation of the incompleteness theorem then takes the following form:

Under certain conditions on the fundamental pair, there is no deductive system over L that is both consistent and complete with respect to that pair.
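
On a finite toy fragment the two properties are easy to test directly; the sets below are my own illustration (writing delta for the proof-to-statement function): consistency means every proved word is true, completeness means every true word is proved.

```python
T = {"2=2", "3=3"}        # the "true statements" of a tiny language
proofs = {"2=2", "3=3"}   # a toy set of proofs
delta = lambda p: p       # each proof proves the statement it spells out

proved = {delta(p) for p in proofs}
print(proved <= T)  # consistent: everything proved is true
print(T <= proved)  # complete: everything true is proved
```

Gödel's theorem says that for sufficiently rich languages no deductive system achieves both properties at once; this toy language is far too poor to exhibit that.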

Any system of mathematical axioms, starting from a certain level of complexity, is either internally inconsistent or incomplete.

In 1900, the International Congress of Mathematicians was held in Paris, at which David Hilbert (1862-1943) presented, in the form of theses, the 23 problems he had formulated as the most important ones, to be solved by the theoretical scientists of the coming twentieth century. Number two on his list was one of those simple-looking problems whose answer seems obvious until you dig a little deeper. In modern terms, it was the question: is mathematics self-sufficient? Hilbert's second problem amounted to the demand to prove rigorously that the system of axioms - the basic statements taken in mathematics as a foundation without proof - is perfect and complete, that is, allows one to describe mathematically everything that exists. It had to be proved that one can set up a system of axioms that, first, are mutually consistent and, second, allow one to draw a conclusion about the truth or falsity of any statement.

Let's take an example from school geometry. In standard Euclidean planimetry (geometry on the plane), it can be proved unconditionally that the statement "the sum of the angles of a triangle is 180°" is true and the statement "the sum of the angles of a triangle is 137°" is false. In essence, in Euclidean geometry any statement is either false or true, and there is no third option. And at the beginning of the twentieth century, mathematicians naively believed that the same situation must hold in any logically consistent system.

And then, in 1931, a bespectacled Viennese mathematician named Kurt Gödel published a short article that simply overturned the whole world of so-called "mathematical logic". After long and complex mathematical and theoretical preambles, he established literally the following. Take any statement of the form "Assumption #247 is logically unprovable in this system of axioms" and call it "statement A". Gödel then proved, quite simply, the following amazing property of any system of axioms:

"If a statement A can be proved, then a non-A statement can be proved."

In other words, if it is possible to prove the validity of the statement "Assumption #247 is not provable", then it is also possible to prove the validity of the statement "Assumption #247 is provable". That is, returning to the formulation of Hilbert's second problem: if a system of axioms is complete (that is, any statement in it can be proved or refuted), then it is inconsistent.

The only way out of this situation is to accept an incomplete system of axioms. That is, we have to put up with the fact that in the context of any logical system we will still be left with "type A" statements whose truth or falsity is evident - but which we can judge only outside the framework of the axiomatics we have adopted. If there are no such statements, then our axiomatics is contradictory, and within its framework there will inevitably be formulations that can be both proved and refuted.

Hence the formulation of Gödel's first, or weak, incompleteness theorem: "Any formal system of axioms contains unresolved assumptions." But Gödel did not stop there, formulating and proving the second, or strong, incompleteness theorem: "The logical completeness (or incompleteness) of any system of axioms cannot be proved within the framework of this system. To prove or disprove it, additional axioms (a strengthening of the system) are required."

It would be comforting to think that Gödel's theorems are abstract and concern not us but only areas of sublime mathematical logic, but in fact it turned out that they bear directly on the structure of the human brain. The English mathematician and physicist Roger Penrose (born 1931) showed that Gödel's theorems can be used to argue for fundamental differences between the human brain and a computer. The point of his reasoning is simple. A computer operates strictly logically and is unable to determine whether statement A is true or false if that question goes beyond the axiomatics, and such statements, by Gödel's theorem, inevitably exist. A person, faced with such a logically unprovable and irrefutable statement A, is always able to determine its truth or falsity, drawing on everyday experience. In this, at least, the human brain is superior to a computer shackled by pure logical circuits. The human brain is able to grasp the full depth of the truth contained in Gödel's theorems, but a computer never can. Therefore, the human brain is anything but a computer. It is capable of making decisions, and it would pass the Turing test.

I wonder if Hilbert had any idea how far his questions would take us?

Kurt Gödel, 1906-1978

An Austrian, later American, mathematician. Born in Brünn (now Brno, Czech Republic). He graduated from the University of Vienna, where he stayed on as a lecturer in the department of mathematics (from 1930 as a professor). In 1931 he published the theorem that later received his name. A purely apolitical person, he took the murder of his friend and department colleague by a Nazi student extremely hard and fell into a deep depression whose relapses haunted him to the end of his life. In the 1930s he emigrated to the United States, but returned to his native Austria and got married. In 1940, at the height of the war, he was forced to flee to America in transit through the USSR and Japan. For some time he worked at the Princeton Institute for Advanced Study. Unfortunately, the scientist's psyche could not stand the strain, and he died of starvation in a psychiatric clinic, refusing to eat because he was convinced that he was going to be poisoned.


    How does a scientific model develop in the natural sciences? Everyday or scientific experience accumulates; its milestones are carefully formulated in the form of postulates and form the basis of the model: a set of statements accepted by everyone who works within that model.

    Anatoly Wasserman

    In 1930, Kurt Gödel proved two theorems that, translated from mathematical language into human language, mean roughly the following: any system of axioms rich enough to define arithmetic in it will be either incomplete or inconsistent. An incomplete system means that one can formulate in it a statement that can be neither proved nor disproved by the means of that system. But God, by definition, is the ultimate cause of all causes. Mathematically, this means that introducing the axiom of God makes our whole system of axioms complete. If there is a God, then any statement can be either proved or refuted by appealing, one way or another, to God. But according to Gödel, a complete system of axioms is inevitably inconsistent. That is, if we believe that God exists, then we are forced to conclude that contradictions are possible in nature. And since there are no contradictions - otherwise our whole world would crumble from them - one has to conclude that the existence of God is incompatible with the existence of nature.

    Sosinsky A. B.

    Gödel's theorem, along with the discovery of relativity, quantum mechanics, and DNA, is generally regarded as the greatest scientific achievement of the 20th century. Why? What is its essence? What is its significance? Alexey Bronislavovich Sosinsky, mathematician, professor at the Independent University of Moscow, officer of the Order of Academic Palms of the French Republic, laureate of the 2012 RF Government Prize in education, takes up these questions in his lecture within the Polit.ru Public Lectures project. In particular, he gives several different formulations of the theorem, describes three approaches to proving it (those of Kolmogorov, Chaitin, and Gödel himself), and explains its significance for mathematics, physics, computer science, and philosophy.

    Uspensky V. A.

    The lecture is devoted to the syntactic version of Gödel's Incompleteness Theorem. Gödel himself proved the syntactic version using a stronger assumption than consistency, namely the so-called omega-consistency.

    Uspensky V. A.

    Lectures of the Summer School "Modern Mathematics", Dubna.
