Unfashionable

For some things Harvard suffices; this blog is for the rest.

What is Mathematics, really?

This essay will hopefully clear up a confusion most people have about the nature of mathematics. The confusion is a consequence of the way mathematics is taught in schools and universities. I speak of the picture of mathematics as a discipline without uncertainty, where disagreements only last until a proof is found, and therefore, everyone agrees on the same mathematical “reality.”

Unfortunately, this picture is false. Even mathematics does not reveal absolutely certain truths. For a more accurate picture of mathematics, we have to embark on a long journey through foundationalism, formalism, set theory, Gödel's incompleteness theorems (yes, there are two of them), and epistemology. In the end, we shall arrive not only at a more accurate picture of mathematics but also understand what physics has to do with all of this.

ABCDrama

A good place to start our journey is the controversy surrounding the proof of the abc conjecture. This conjecture is one of those conjectures of number theory that are easy to state but extremely difficult to prove. It is about the equation a+b=c, where a, b, and c are positive integers that are relatively prime—i.e., they do not share any prime factor. In other words, no number greater than 1 divides all three. For example, 8 + 9 = 17 satisfies this condition, but 6 + 9 = 15 does not, since 6, 9, and 15 are all divisible by 3.

Informally, the conjecture states that the product of the distinct prime factors of abc (called d) is rarely much smaller than c. We will not go into more detail here because the specifics of this conjecture are not very important for our purpose. However, rest assured that there is a way to rigorously express this informal statement inside of mathematics and that this conjecture does have wide-ranging implications.
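To make the flavor of the conjecture concrete, here is a small Python sketch of my own (not part of the conjecture's formal statement): it computes the product of distinct prime factors—the quantity the text calls d, often written rad(abc)—and scans for the rare triples where it falls below c.

```python
from math import gcd

def rad(n):
    """Product of the distinct prime factors of n (the text's d)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result *= n
    return result

# Scan coprime triples a + b = c and print the rare "high quality" cases
# where rad(abc) is actually smaller than c; the conjecture makes precise
# the sense in which such cases are exceptional.
for c in range(3, 200):
    for a in range(1, c // 2 + 1):
        b = c - a
        if gcd(a, b) == 1 and rad(a * b * c) < c:
            print(f"{a} + {b} = {c}, rad = {rad(a * b * c)}")
```

Running this turns up only a few such triples below 200 (for instance 1 + 8 = 9, where rad(72) = 6), which hints at the phenomenon the conjecture describes.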

The interesting part for us is that there was no consensus regarding the validity of a proof for almost six years. (Today, it is widely accepted that the proof is wrong.) I will try to summarize what happened.

In 2012, Shinichi Mochizuki published a series of papers—together 500 pages long—claiming to have finally found a proof of the famous abc conjecture. Although no one could find a serious error, many number theorists remained skeptical. A few months after the proof was published, Cathy O'Neil published a blog post called The ABC Conjecture has not been proved. The discussion kept going, and in 2017, Frank Calegari published a blog post titled The ABC conjecture has (still) not been proved. However, even in this blog post, he did not discuss a concrete error in the proof. Finally, at the end of 2018, Peter Scholze and Jakob Stix published a paper detailing a problem with Corollary 3.12 of the third paper—a problem that had already been discussed in private circles (for mathematicians these days, this includes the comments of blog posts; I am not kidding, look at the comments of, say, this post). Scholze and Stix published their paper after a meeting with Mochizuki, which was summarized by Quanta Magazine as follows:

"But the meeting led to an oddly unsatisfying conclusion: Mochizuki couldn’t convince Scholze and Stix that his argument was sound, but they couldn’t convince him that it was unsound."

How is this possible? A proof that was discussed for six years, and a meeting of three mathematicians in which neither side could convince the other of its "arguments." If you hold the standard view of mathematics, the word "argument" alone makes no sense. Is not mathematics the discipline where you do not have to argue? You just find a proof, and that is it, right? To answer these questions, we have to go back to the beginning of the 20th century, when mathematicians were facing a similar problem.

A Simple Proof

At this time, mathematics was becoming more and more abstract. Mathematicians were worried about logical gaps or contradictions because some proofs were perceived to lack rigor. This was a very serious concern, which grew into a full crisis—the foundational crisis of mathematics. Nobody was sure whether, somewhere "at the bottom," a mistake had introduced a false theorem that could bring down the whole structure mathematicians had built up. It was the great aim of this era to put mathematics on a rigorous and formal foundation and to define which methods of proof are valid and which are not, so everyone could check the validity of a proof without any doubt.

Arguably the most influential person in this search for a foundation of mathematics was the German mathematician David Hilbert. At the beginning of the 20th century, he proposed Hilbert's Program: a solution—or more accurately, a plan for finding a solution—to the foundational crisis of mathematics.

Hilbert was a so-called formalist. In his view, mathematics—especially higher mathematics—was just the manipulation of meaningless symbols. He called mathematics a "formula game." If one could only find the correct rules to manipulate these symbols, together with the right axioms, mathematics would merely be the mechanical application of known rules. To understand this formalist view of mathematics, which is strongly connected to the foundationalist view, we have to dive into the study of set theory.

Too Naive

At first, people started by studying what is now called naive set theory. This version of set theory defines sets informally, in natural language, and visualizes them with Venn diagrams. It is what most people have encountered in school. The problem—and the reason why it is now called "naive"—is that this approach runs into various paradoxes. The most famous of these is Russell's paradox, which, expressed in natural language, states the following:

Consider a set (S) that contains all sets that are not members of themselves. If S does not contain itself, then—by definition—it must contain itself, and if S does contain itself, then it contradicts the definition.

Another version that might be easier to understand can be expressed like this:

A barber cuts the hair of everyone in town who does not cut their own hair. Does he cut his own hair? If he does not, he has—by definition—to cut his own hair, which he can't do because he only cuts the hair of people who do not cut their own hair—a paradox.

It is not very important to understand the paradox here. It merely serves as an example of why a more sophisticated version of set theory was developed—a version that, so it was hoped, would finally provide a provably secure foundation for mathematics.

Set Theory

When mathematicians talk about set theory, they are almost always talking about axiomatic set theory. I want to introduce the subject with an example from Douglas Hofstadter's brilliant book Gödel, Escher, Bach. It is one of my favorite books of all time—maybe even my favorite book of all time—so if you have not read it, buy it right now and read it (after finishing this essay, of course).

Douglas Hofstadter developed a formal system (or axiomatic system; I use the terms interchangeably here) called the MIU system for his book. In this system, strings can be formed by combining the symbols M, I, and U. These three symbols are what is called the language of this system. Any string that consists of those three symbols is called well-formed. For example, the string MIUU is well-formed, while the string MIT is not. Now that we have defined the language of our system, we have to define our axioms. In this case, there is only one axiom, the string MI. We also need some rules that govern how to create a new string from our axiom. In this formal system, there are four rules:

  1. If you possess a string whose last letter is I, you can add on a U at the end.
  2. Suppose you have Mx. Then you may add Mxx to your collection.
  3. If III occurs in one of the strings in your collection, you may make a new string with U in place of III.
  4. If UU occurs inside one of your strings, you can drop it.

A new string—also called a theorem—can be derived by applying the rules to our axiom. Note that if we derive a theorem—e.g., the theorem MIU by applying the first rule to our axiom—we have, at the same time, proven this theorem inside our formal system. The derivation path is the formal proof of a theorem. If someone does not believe our claim that MIU is a valid—a "true"—theorem, we can show them the derivation path, which is, in this case, very short, and they should be convinced.
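Because the MIU system is purely mechanical, it is easy to simulate. The following Python sketch (my own, not from GEB) implements the four rules and searches breadth-first for a derivation path—that is, for a formal proof:

```python
from collections import deque

def successors(s):
    """All strings derivable from s by one application of the four rules."""
    out = set()
    if s.endswith("I"):                  # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):          # Rule 3: replace III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):          # Rule 4: drop UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def derive(target, max_len=12):
    """Breadth-first search from the axiom MI; returns a derivation path."""
    frontier, parent = deque(["MI"]), {"MI": None}
    while frontier:
        s = frontier.popleft()
        if s == target:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for nxt in successors(s):
            if nxt not in parent and len(nxt) <= max_len:
                parent[nxt] = s
                frontier.append(nxt)
    return None

print(derive("MIU"))    # ['MI', 'MIU'] -- the derivation path is the proof
print(derive("MUIIU"))  # a longer derivation, found automatically
```

Since rule 2 can double a string's length, the search caps string length at `max_len`; within that cap, every theorem it returns comes with its derivation path, i.e., its formal proof.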

To get familiar with formal systems, you can try to derive the theorem MU. Hofstadter posed the same problem in GEB, and I recommend trying it yourself before looking up the solution. Even better, I will not link to the solution here and encourage you to buy Gödel, Escher, Bach to find out yourself.

Margaritas on a Beach

Now that we have learned a little bit about set theory, let us restate what David Hilbert was trying to do. He was trying to find a formal axiomatic system that would act as the foundation of mathematics. This means he was searching for axioms and (logical) rules of inference from which ideally all of mathematics could be derived. Once these axioms and rules had been found, proving a theorem would mean mechanically applying the rules to the axioms until the desired theorem eventually showed up. Today, the application of rules could even be done by a computer, while mathematicians sit on a beach and drink margaritas. So why are mathematicians not doing that? Part of the answer has to do with two important properties of formal systems: consistency and completeness.

The Perfect Foundation

Hilbert's program was not searching for just any foundation; it had to be a provably secure foundation. For Hilbert, this meant a foundation with at least the following properties:

  • A formulation of all mathematics; in other words, all mathematical statements should be written in a precise formal language, and manipulated according to well-defined rules.
  • Completeness: a proof that all true mathematical statements can be proved in the formalism.
  • Consistency: a proof that no contradiction can be obtained in the formalism of mathematics. This consistency proof should preferably use only "finitistic" reasoning about finite mathematical objects.
  • Conservation: a proof that any result about "real objects" obtained using reasoning about "ideal objects" (such as uncountable sets) can be proved without using ideal objects.
  • Decidability: there should be an algorithm for deciding the truth or falsity of any mathematical statement.

As mentioned above, we will focus on points two and three—completeness and consistency—because they are the most interesting and important.

Consistency

Consistency means it should not be possible to prove a contradiction from the axioms. After all, if we can prove a contradiction like 1=0, nothing means anything anymore, and all of mathematics crumbles. These consequences are described beautifully by Ted Chiang in his (fictional) short story "Division by Zero"; I highly recommend reading it.

It is important to note that we want not only a consistent system but a system that can prove its own consistency. This is the case because we want to prove the consistency of complex parts of mathematics—like analysis—with simpler parts. Then we want to prove the consistency of these simpler parts again with even simpler parts—all the way down to arithmetic (which is hopefully reducible to logic). If we can prove arithmetic consistent, but only by using a more complex system, we have not gained any security because we have proven the consistency using a more complex system that might be inconsistent. Trying to prove this more complex system consistent with another even more complex system leads to an infinite regress, and we have not learned anything.

It follows that we want to prove the consistency of arithmetic inside arithmetic, so we can then use arithmetic to prove other complex parts of mathematics consistent. Proving the consistency of arithmetic was the second problem of Hilbert's famous 23 problems—a list he published in 1900. We will discuss the "solution" to this problem shortly, but first, we have to take a look at the second important property.

Completeness

A formal system is complete if, and only if, every well-formed formula, or its negation, can be derived. In other words, all true statements that can be expressed in the language of the formal system have to be provable. Incompleteness would mean that there are theorems that are well-formed and true but cannot be proven. There would be a gap between what is provably true and what is true—no margaritas on the beach for mathematicians and more hard work. However, this property was, at the time, not seen as much of a problem, because if someone were able to find a true theorem that could not be proven inside, e.g., arithmetic, one could simply add this theorem to the axioms, and the system would be complete again. This works because axioms are by definition true. Sounds great, so what happened?

Gödel

In 1931, at the age of 25, Kurt Gödel published his seminal paper Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme (in English "On Formally Undecidable Propositions of Principia Mathematica and Related Systems"—a catchy title, I know). It is almost impossible to overstate the significance of Gödel's insight as people like John von Neumann knew very well:

"Kurt Gödel's achievement in modern logic is singular and monumental—indeed it is more than a monument, it is a landmark which will remain visible far in space and time. ... The subject of logic has certainly completely changed its nature and possibilities with Gödel's achievement."

Before we talk about his two theorems, I want to share a robust heuristic that I have developed based on the observation that most people know of only one of the theorems.

Sven's heuristic: If someone at your next cocktail party starts talking about Gödel's incompleteness theorem, without knowing there are two of them, everything they say about any topic should be considered bullshit.

Unfortunately, it is not possible to do justice to Gödel's theorems in anything less than the length of a book—after all, "Gödel, Escher, Bach" has 700+ pages. One reason for this is the far-reaching consequences of the theorems, which we will not be able to go into detail about in this essay. The other reason is that Gödel's proof is not what one would naively expect from a "mathematical proof," and we will come back to this point later.

Introduction

Before we talk about the first theorem, I want to quote the beginning of his paper because it summarizes very well what we have talked about so far (the translation quoted below is Bernard Meltzer's version. However, Gödel was never fully satisfied with the translations of his texts, so I recommend reading the original for anyone able to read German):

"The development of mathematics in the direction of greater exactness—as is well known—led to large tracts of it becoming formalized, so that proof can be carried out according to a few mechanical rules. The most comprehensive formal systems yet set up are, on the one hand, the system of Principia Mathematica (PM) and, on the other, the axiom system for set theory of Zermelo-Fraenkel (later extended by J. v. Neumann). These two systems are so extensive that all methods of proof used in mathematics today, have been formalized in them, i.e. reduced to a few axioms and rules of inference. It may therefore be surmised that these axioms and rules of inference are also sufficient to decide all mathematical questions which can in any way at all be expressed formally in the systems concerned. It is shown below that this is not the case, and that in both the systems mentioned, there are in fact relatively simple problems in the theory of ordinary whole numbers which cannot be decided from the axioms."

Number One

The first incompleteness theorem says that for any sufficiently rich (and consistent) formal system, there are statements in the language of the formal system that cannot be proven or disproven—the system is incomplete. Put differently, provability is a weaker notion than truth in any given formal system.

In this context, sufficiently rich (also called "strong") refers to the ability of a formal system to prove things. You could construct a formal system with only one axiom and one simple rule of inference, which would be complete. However, this system would not be useful for much, and certainly not for doing arithmetic. You can either have a complete but (mostly) useless system or a useful but incomplete one. There is no way to have both.

For his proof, Gödel invented a new way of numbering theorems, now called Gödel numbering. Together with a version of Cantor's diagonal argument, this technique allowed Gödel to construct a statement that talks about itself—asserting its own unprovability. Translated very informally into natural language, the sentence states: "This statement can't be proven inside this system."

If we were able to prove this statement, we would have a contradiction because the statement itself asserts its unprovability. A contradiction would mean that the system is inconsistent, and everything falls apart. To keep the assumption that our system is consistent, we have to conclude that the statement is true but cannot be proven. Now we have a true statement, which cannot be proven inside the system.

We know, looking from the "outside" of the system, that the statement must be true; nonetheless, we can never prove it inside the system. For those who want to read a more technical explanation of the proof, I recommend this article. I feel that most people, even after reading the linked article, will, unfortunately, still struggle to grasp the proof. The best advice I can give is to read Gödel, Escher, Bach.
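The numbering trick at the heart of the proof is simple to demonstrate. In the Python sketch below (my own toy version; the symbol table and code numbers are arbitrary choices, not Gödel's actual ones), a formula becomes a single integer by assigning one prime to each symbol position:

```python
def primes():
    """Yield the primes 2, 3, 5, ... by trial division."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

# Toy symbol table -- the specific code numbers are an arbitrary choice.
CODE = {"0": 1, "s": 3, "=": 5, "(": 7, ")": 9, "+": 11}

def godel_number(formula):
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... (one prime per position)."""
    g = 1
    for p, symbol in zip(primes(), formula):
        g *= p ** CODE[symbol]
    return g

# "0=0" becomes one integer, so statements *about* formulas (and about
# proofs, which are sequences of formulas) become statements about numbers:
print(godel_number("0=0"))  # 2**1 * 3**5 * 5**1 = 2430
```

Because prime factorization is unique, the encoding is reversible, and that is what lets an arithmetic statement indirectly talk about formulas—including itself.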

After reading my brief natural-language explanation of the proof, many readers will probably come away thinking that this is not a "real" proof. It feels like Gödel somehow "cheated," and there should be a way out of it. This intuition stems from the fact that our natural language contains many self-referential statements like Gödel's, which have no "meaning" whatsoever. Because natural language is very vague, statements similar to Gödel's appear often, and without any consequences—nobody worries that they break our natural language. The difference is that Gödel produced such a statement inside a well-defined formal system, where every symbol is rigorously defined and the resulting statement has a very clear "meaning." In other words, there is no easy way out.

I put the term "meaning" in quotation marks because rigorously defining what this term means in our context would require a deep dive into homomorphisms and especially isomorphisms, which is definitely beyond the scope of this essay (again, GEB). Luckily for me, while I was writing this essay, 3Blue1Brown published a video in which he explains, among other things, isomorphism with the help of group theory. So for those who are interested, you can watch it here. I highly recommend it, but it is in no way necessary for this essay.

It might now be understandable why a certain richness is required for this proof. The reason is the self-referential nature of the statement—not every formal system can express that. However, as mathematicians came to understand, it is surprisingly difficult to construct a system in which such a statement is impossible. For if one understands exactly how the statement refers to itself, it becomes clear that to ban this indirect self-reference, one has to destroy the whole system. And this is what I mentioned at the beginning of this section: to construct a complete system is to construct a useless one.

Computation

For readers familiar with computability theory, this result should not be surprising. Another way to formulate what we have learned is that the threshold for the universality of formal systems is extremely low. This is obvious after understanding the computational universality of cellular automata that Stephen Wolfram explains in his A New Kind of Science. It turns out that, as with cellular automata, formal systems are either simple or universal.
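Wolfram's point is easy to see on screen. Rule 110, an elementary cellular automaton whose next cell state depends only on a three-cell neighborhood, is known to be computationally universal, and a few lines suffice to run it (this sketch is my own illustration):

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton (wrap-around edges)."""
    n = len(cells)
    return [
        # Read the 3-cell neighborhood as a 3-bit index into the rule number.
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch non-trivial structure emerge.
row = [0] * 40 + [1] + [0] * 40
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The update rule is trivial to state, yet the system it generates can, in principle, simulate any computation—the same "simple or universal" dichotomy the text describes for formal systems.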

It also turns out that the first incompleteness theorem has a much simpler proof inside computer science. It is based on Turing's proof that the Halting problem is unsolvable—that is, there exists no program that can decide whether another program will halt or run forever. We will not prove this result here, but if you accept that the halting problem is unsolvable, the first incompleteness theorem is a simple consequence. Because if the incompleteness theorem were false, there would exist a complete formal system in which you can prove (or disprove) all statements about integers. The statement that a program halts can be expressed as a statement about integers. Now we just have to search through the complete formal system until we find a proof that the program does (or does not) halt. In other words, this algorithm would be able to solve the halting problem, which we already know is unsolvable. Therefore, our assumption that there exists such a complete formal system must be wrong. This emergence of Gödel's incompleteness out of Turing's work is one of those beautiful examples that show us a deep connection between mathematics and computation. (It must be so, of course, because computation is just constructivist mathematics, but we will discuss this point a little bit later on).
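The diagonal trick behind Turing's proof can even be sketched directly in Python (my own illustration, not a formal proof): given any claimed halting decider, we can build a program the decider must misjudge.

```python
def make_diagonal(halts):
    """Given a claimed halting decider, build a program it gets wrong."""
    def diagonal():
        if halts(diagonal):
            while True:       # the decider said "halts" -> loop forever
                pass
        # the decider said "loops forever" -> halt immediately
    return diagonal

# Whatever concrete decider you plug in is defeated. For example, a
# decider that always answers "loops forever":
d = make_diagonal(lambda prog: False)
print(d())  # prints None: d halts, so the decider was wrong about d
```

The same construction works against any candidate `halts`, which is the contradiction: no total, always-correct halting decider can exist.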

New Axioms?

The last way out of this whole situation would be, as mentioned above, to add the theorem to our axioms. Gödel knew this, of course, and showed in the very same paper why it does not help. The problem is that the new system, including Gödel's statement as one of its axioms, would be susceptible to the same procedure as the original one, producing a new statement that could not be proven inside it. This was another unwelcome fact for the mathematicians of the 20th century.

Number Two

For the proof of incompleteness above, we assumed that our system was consistent because there would be no point in doing mathematics inside an inconsistent system—remember if 1=0, everything falls apart. The first theorem showed that most formal systems have true statements we cannot prove inside them, but can we at least prove that we will never run into a contradiction? Gödel answered with a clear no.

Using his numbering, Gödel constructed a proof similar to the first, where he showed that the statement "0=1 cannot be proven from the Axioms of this system" is unprovable (this is again a very informal presentation of his proof).

There is some debate over whether a different formalization of consistency might be provable, because Gödel's proof only asserts the unprovability of one statement (that 1=0 cannot be proven), not of every such statement (e.g., maybe it can be proven that 1 never equals 2). However, to date there is no proof of consistency that would confirm this point, so it remains a philosophical subtlety.

A Sketchy Foundation

Gödel's work showed that the aims of Hilbert's program are unattainable. There is no provably consistent foundation of mathematics that can generate all the true theorems. Gödel proved that mathematicians are standing on a very sketchy foundation. At the time, this was a huge shock to many formalists and foundationalists alike.

Mystical Intuition

A way around this problem was the other popular philosophy of mathematics: intuitionism. Intuitionists hold that the certainty of mathematics does not, as the formalists and foundationalists believed, depend on the mechanical application of rules to a secure foundation, but rather on the intuition of the ideal mathematician. The founder of intuitionism, Dutch mathematician L.E.J. Brouwer, expressed his view as follows:

"Completely separating mathematics from mathematical language and hence from the phenomena of language described by theoretical logic, recognizing that intuitionistic mathematics is an essentially languageless activity of the mind having its origin in the perception of a move of time. This perception of a move of time may be described as the falling apart of a life moment into two distinct things, one of which gives way to the other, but is retained by memory. If the twoity thus born is divested of all quality, it passes into the empty form of the common substratum of all twoities. And it is this common substratum, this empty form, which is the basic intuition of mathematics."

If this description does not make much sense to you, do not worry; the problem lies with intuitionism. Based on this intuition—which unfortunately seems to differ between mathematicians—basic logical principles such as the law of the excluded middle are rejected. This is all part of the constructivist view of mathematics, in which only those entities exist that can be constructed. It is therefore not possible, as is done in "classical" mathematics, to prove the existence of a mathematical object by assuming its non-existence and deriving a contradiction.

For example, infinities do not exist for most intuitionists and constructivists because they cannot be constructed—although, ironically, some intuitionists seem to have a different intuition about infinities. The basic argument against the existence of infinity goes as follows. A mathematician—even the ideal one—is a finite being and can only execute finitely many operations or constructions. It is therefore impossible to construct the infinity of natural numbers, for a mathematician is only able to perform a finite number of "the successor of number X" constructions. The infinity of natural numbers is sometimes also called a "potential" infinity because, in the view of intuitionists, it is not "real."

At first, this argument seems sound, but it contains one fatal mistake: equating logical constructions or operations with physical ones. The claim is that because we can only perform a finite number of physical operations, we are only able to perform a finite number of "logical operations." This is wrong, and it is the same mistake that underlies Zeno's paradox. (Indeed, the claim that humans can perform "logical operations" at all is wrong, as we will see.)

In one version of this paradox, Achilles is in a footrace against a tortoise, and it is argued that with a little head start, the tortoise will never be caught—no matter how fast Achilles runs. The reason is that to catch up, Achilles has to run to the point where the tortoise currently is, but when he gets there, the tortoise will have moved a little, and when Achilles gets to this new position, the tortoise will have moved a little again, and so on ad infinitum. In other words, to catch up, Achilles needs to perform an infinite number of "move to the tortoise's position" operations, which he, as a finite being in finite time, cannot do.

The fallacy is the same as the one finitists fall into. What is going to happen in the physical world is not decided by logic but depends on the physical laws. If these laws say that Achilles will catch up, he will do so, no matter the infinite logical steps. This is a crucial point. It is false to assume that because it takes an infinite number of "logical operations," it has to take an infinite number of physical operations. Every time you walk a certain distance, in a finite amount of time, you perform an infinite number of "move to the next point" operations because any distance can be broken up into an infinite number of points. However, in the physical world, only the physical laws decide what is and is not possible. As I have said, it is a mistake to think humans can perform logical operations at all. Everything we do is a physical process governed by physical laws—not by logical or mathematical laws.

(Note that there is also a mathematical "solution" to this paradox, which is based on the fact that the sum of an infinite amount of ever-smaller parts is finite. This solution is false because even if this sum were infinite (assuming the same physical laws), Achilles would still catch up to the tortoise. Abstract mathematics simply does not matter here. What matters are the laws of physics.)
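For the record, the convergent sum that this textbook resolution appeals to is easy to compute; the speeds and head start below are assumed numbers of my own, chosen only for illustration:

```python
# Assumed numbers: Achilles at 10 m/s, tortoise at 1 m/s, 100 m head start.
achilles_speed, tortoise_speed, head_start = 10.0, 1.0, 100.0

gap, total_time, stages = head_start, 0.0, 0
while gap > 1e-12:                 # sum the "run to where the tortoise was" stages
    t = gap / achilles_speed       # time to cover the current gap
    total_time += t
    gap = tortoise_speed * t       # how far the tortoise crawled meanwhile
    stages += 1

print(stages, total_time)  # total_time converges to 100/9 ≈ 11.11 s
```

Infinitely many logical stages, but a finite total. Whether Achilles actually catches up is, as the essay argues, decided by the laws of physics, not by this sum.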

So formalism and intuitionism do not deliver the promised absolute certainty and, more importantly, are not a correct description of how mathematics is done. So how do mathematicians know all of this if not through intuition or the application of mechanical rules?

Explanations, all the way down

Mathematicians do what all other scientists do: they conjure up explanations and criticize them. Therefore, the only way to know whether an explanation—a proof—is correct is to understand it. However, even if we are absolutely certain that the proof is correct, we might be mistaken. The picture of mathematics as a discipline that discovers infallible—absolutely certain—truths is wrong. It is also false that mathematics discovers truths only by deduction, as the formalists believed.

Gödel, again

Gödel's proof not only destroyed the hope of a secure foundation of mathematics; it also did so with a new method of proof. He proved his theorems in a way that is itself an example of how wrong the formalist view of mathematics is. It was believed that all mathematical proofs could be, at least in principle, restated as a sequence of deductions—i.e., a sequence like this:

  • Socrates is a man.
  • All men are mortal.
  • Therefore, Socrates is mortal.

The last statement follows logically from the two premises and is therefore taken to be "self-evidently true." It is nonetheless a mistake to assume that all proofs that do not follow this line of reasoning are therefore less "true" or not acceptable.

Once understood, Gödel's proof is just as "self-evidently true" as a proof by deduction. There is no way, as formalists and intuitionists alike hoped, to classify all valid methods of proof, because, as Gödel showed, people invent new methods of proof. The only way to judge the validity of a new method of proof is to understand it, which cannot be done before it is invented. But remember, we might always be mistaken, even about things we hold as "self-evidently true"—some might say, especially about those things.

However, this is not the only problem with a pure inductivist or formalist view of mathematics. Even applying mechanical rules does not lead to the promised absolute certainty. We still need to rely on explanations.

The Nature of Proofs

The problem with extremely formal proofs is that it takes many steps to prove something very simple. It famously took Alfred North Whitehead and Bertrand Russell close to 400 pages in their Principia Mathematica to prove that 1+1=2. (Principia Mathematica is also the system Gödel used in his paper as an example to demonstrate incompleteness.)
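To get a feel for why such formal derivations balloon, here is a tiny Peano-style sketch in Python (my own drastic simplification of Principia's machinery, of course): numbers are nested applications of a successor function, and even 1 + 1 = 2 must be unwound step by step from the recursive definition of addition.

```python
zero = ()                      # 0
succ = lambda n: (n,)          # successor: n -> n + 1

def add(m, n):
    """Addition by the Peano recursion: m + 0 = m, m + S(n) = S(m + n)."""
    return m if n == zero else succ(add(m, n[0]))

one, two = succ(zero), succ(succ(zero))
print(add(one, one) == two)  # True: "1 + 1 = 2", one unwinding at a time
```

Every addition bottoms out in a chain of successor applications; a fully formal system pays this kind of cost for every step of every proof, which is how a statement as simple as 1+1=2 ends up hundreds of pages deep.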

This should make obvious why a formal proof by hand is very impractical. And not only that: humans will make mistakes even if they just have to apply mechanical rules. So can we really be certain of the validity of a formal proof that took 700 pages and relies on some non-provable axioms? Is not the same result, coming from a shorter, high-level explanation, just as valid?

One might argue that humans make mistakes while applying these mechanical rules, but today we have computers to do it. This is true, but it does not provide us with absolute certainty, because even computers make mistakes. There are bugs in software, and due to the fuzzy nature of physical reality, it is not possible to prevent some bits from being flipped. So if a formal proof is long enough—say, 100 billion steps in a formal system—then it is very likely that our computer will have made a mistake somewhere.

More importantly, we need to understand how a computer applies these rules: how the software works, how the computation is implemented physically, how error correction works, and so on. In other words, computer proofs are worth nothing without explanations.

Imagine we had a computer that told us whether any theorem we put in was correct, but we did not know how it works. Now imagine we put in, say, the Riemann hypothesis, and the computer says it is correct. Have we learned anything? No. Sure, the computer might be right, but it might also be wrong. The only way to know is to understand what it does. We need, for example, the person who built the computer and wrote the software to explain to us how it works. Only after we have understood this explanation can we judge the validity of the proof.

In addition, this view implicitly assumes that the most valuable thing about any proof is its result. Interestingly enough, this is false as well. Take, for example, prime numbers. Which numbers are prime and which are not is already decided by the definition, so what is there left to do? Is not determining whether a given number is prime the most important question? No, it is not. Even though the definition settles which numbers are prime, we have not thereby understood them. What mathematicians do is try to understand how the primes are distributed, why they are distributed the way they are, and other similar questions. And even in a proof of a theorem about, say, the distribution of primes, the result (whether the theorem is true or false) is not the most important part. The most important part is how this fact is proven (the explanation) and whether this proof reveals some previously unnoticed connections. As David Deutsch puts it:

“Necessary truth is merely the subject-matter of mathematics, not the reward we get for doing mathematics. The objective of mathematics is not, and cannot be, mathematical certainty. It is not even mathematical truth, certain or otherwise. It is, and must be, mathematical explanation.“

This is a bold assertion and deserves some further discussion. Luckily for us, the proof of the four-color theorem provides a great example. The four-color theorem was one of the most famous conjectures in mathematics for which no proof could be found for a long time. Informally, it states that you need only four different colors to color the regions of an arbitrary map so that no two adjacent regions share a color. The analogous theorem had already been proven for five colors, but the proof for four colors eluded mathematicians until 1976. Kenneth Appel and Wolfgang Haken proved the theorem by reducing the search space to a couple of thousand map configurations and then letting a computer check them all. The computer found no counterexample, which proved the theorem. At the time, the proof was very controversial and not well accepted, for two main reasons. The first was the unreliability of computers, but the second, and more important one, was that it does not feel like a proof. As Scott Aaronson explains in Quantum Computing Since Democritus:

“Why did some mathematicians look askance at the proof, or at least hold out hope for a better one? Because the computer “might have made a mistake”? Well, that’s a feeble argument, since the proof has now been rechecked by several independent groups of programmers using different hardware and software, and at any rate, humans make plenty of mistakes too! What it boils down to, I think, is that there is a sense in which the Four-Color Theorem has been proved, and there’s another sense in which many mathematicians understand a proof, and those two senses aren’t the same. For many mathematicians, a statement isn’t proved when a physical process (which might be a classical computation, a quantum computation, an interactive protocol, or something else) terminates saying that it’s been proved – however good the reasons might be to believe that physical process is reliable. Rather, the statement is proved when they (the mathematicians) feel that their minds can directly perceive its truth.“

This is close to the intuitionist view of mathematics that I explained earlier and therefore suffers from its inherent vagueness. Scott Aaronson knows that many mathematicians do not feel that the theorem has been proven, but he does not know why. He correctly observes that the main concern is not the reliability of the proof, but he fails to grasp the difference between the proof of the four-color theorem and other, more satisfying proofs: the proof by brute force explains little, if anything.

The proof does not contribute significantly to our understanding of mathematics (apart from the impressive methods involved in reducing the search space). It does not explain why the theorem is true in any deep sense, and if understanding and explaining mathematical structures is the goal of mathematicians, it is only natural that they should feel disappointed. When mathematicians say they are searching for a more “satisfying“ proof, what they really mean is that they are searching for a better explanation; an explanation that, if deep enough, contributes not only to our understanding of this particular branch of mathematics but also to our understanding of many other parts.
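To make the "brute force" flavor concrete, here is a toy sketch. This is my own illustration, not the Appel–Haken method (which relied on far more sophisticated reducibility arguments): it exhaustively checks whether a small map, given as a list of bordering regions, admits a four-coloring. A "yes" answer tells us the fact but explains nothing about why it holds.

```python
from itertools import product

def is_four_colorable(regions, borders):
    """Brute-force search: try every assignment of 4 colors to the
    regions and report whether any assignment gives no two bordering
    regions the same color."""
    for coloring in product(range(4), repeat=len(regions)):
        colors = dict(zip(regions, coloring))
        if all(colors[a] != colors[b] for a, b in borders):
            return True
    return False

# A made-up "wheel" map: region A borders B, C, D, E, which form a ring.
regions = ["A", "B", "C", "D", "E"]
borders = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
           ("B", "C"), ("C", "D"), ("D", "E"), ("E", "B")]
print(is_four_colorable(regions, borders))  # True
```

The search succeeds, but nothing in the run tells us why four colors always suffice for planar maps; scaled up to thousands of configurations, that is precisely the epistemic situation mathematicians found unsatisfying.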

Physics

A consequence of understanding that mathematical proofs are physical processes is that the validity of a given mathematical proof depends on our knowledge of the physical laws. For if we are wrong about how the computer (or ink and paper) works, we would also be wrong about the validity of the proof. We might be sure that ink and paper (or a computer) work the way we think they do, but we can never be certain.

This is a very important point to understand, and it also clears up a mystery from the beginning of this essay concerning the use of the word "argument" in a mathematical context. For all that mathematicians do is provide an argument—an explanation—for, say, a theorem being true. They try to explain why it has to be one way and not the other. The difference from the other sciences lies in the fact that these arguments are, due to their formal nature (even informal mathematical arguments are formal compared with those of the other sciences), very rigorous. This is why, in mathematics, gaps and errors in explanations are easier to find. Misunderstandings are also less likely because all the terms are rigorously defined. All of this gives rise to the illusion of certainty. It is, nonetheless, false to believe that mathematical knowledge is certain.

Computer Proofs

It is ironic that, in the past, mathematicians hoped to formalize mathematics to the extent that computers could do it by applying mechanical rules. Yet now that computers help with proofs, some mathematicians do not accept these "computer-assisted proofs," based on some of the objections I have raised above. They claim that we cannot be certain that these proofs are correct because we cannot be certain how a computer works—and they are right. However, as I have explained, they are wrong to think that normal proofs provide us with certainty.

Other people want to use computers more and are therefore calling for "standards" in computer proofs, to decide which are valid. This is the same mistake that underlies proof theory. There is no way to classify all valid methods of proof, because new ones will be invented. It is, of course, necessary to discuss the validity of specific methods of computer proof (as with any "normal" proof), but it is not possible to reject a proof merely because it is not on your list of "valid proofs," since such a list is always incomplete.

Not Relativism

The picture of mathematics as a discipline where fallible people invent explanations should not be confused with relativism. Relativism is the belief that no objective truth (or reality) exists and that everything is subjective. This view is as false in mathematics as in every other science. Just as there exist objective fundamental physical laws that we do not know, there exist objective mathematical truths that we do not know. In both cases, we do not have direct access to these truths, but we try to guess them. We creatively come up with explanations and criticize them. In physics, these explanations are then tested by experiments; in mathematics, this job is done by proofs.

We do not have time to get into the ontology of mathematics here (that would be a whole separate essay), so for now, you have to accept that mathematical objects are just as "real" as any other object. They "kick back," as David Deutsch puts it. The only difference is that they are, obviously, not physical. However, this is a smaller difference than you might think.

The view that there exists an objective mathematical reality independent of us humans is called Platonism. However, most Platonists believe that we—or certainly mathematicians—can somehow gain access to this world where these objective truths lie (e.g., through a special kind of intuition). This is, as I have explained, false. Mathematicians do nothing mystical when they do mathematics. They conjure up explanations like everyone else.

This also answers the famous question: is mathematics invented or discovered? My preferred answer is neither. It exists independently of us humans, and we know about it, as we know about anything else, through explanations. But if I were forced to stay in this framework of "invented or discovered," the answer would be that mathematics is discovered by methods we invent. However, one has to be careful here not to assume—as most people unfortunately do—that "discovery" or our methods are infallible. What we have "discovered" can always turn out to be wrong or incomplete.

As mentioned above, a different way to look at it is that mathematicians study necessary truths but do not deliver them. Necessary truths are things that are true by definition. If you define a triangle as having three sides, you cannot afterward say that it has four sides; that is logically impossible. The same is true for 2+2=4, which recently gained notoriety because some people tried to deny it. By the very nature of how our numbers are defined, 2+2=4. Sure, we can use different symbols for 2 and 4—let's say 1 and 5—and construct the true statement 1+1=5. This does not, however, change anything in mathematical reality. The mathematical objects that we called 2 and 4 are now called 1 and 5, but that is just semantics.

It is also true that there are contexts in which you can say 2+2=5, or something similar like 23+2=1 (if you are talking about a 24-hour clock). Such statements show nothing except that the symbols we use to represent the natural numbers (abstract mathematical objects) can also be used to represent different things. The symbol 2 is not the mathematical object "2"; it is our physical representation of that object. We can use this symbol in different contexts, but this does not change 2+2=4. The fact that, as I have argued, we can never be 100% certain about this does not mean 2+2=5. It just means that one day we could discover that, for example, we live in a simulation and that in the "real world," the mathematical object we mean by 2 behaves differently than we thought. Even though mathematics is the study of absolute truths, this does not make our knowledge of it certain.
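The clock example can be made concrete in a few lines of code (a minimal sketch of my own; the `%` operator implements arithmetic modulo 24):

```python
# Ordinary integer arithmetic: the symbols denote natural numbers.
assert 2 + 2 == 4

def clock_add(a, b):
    """Addition of hours on a 24-hour clock: the same symbols now
    denote residues modulo 24, a different mathematical structure
    that happens to share notation with the integers."""
    return (a + b) % 24

assert clock_add(23, 2) == 1  # two hours after 23:00 is 01:00
```

Neither statement contradicts the other: 2+2=4 holds for the integers regardless of which other structures borrow the symbols.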

Consequences

At last, I want to discuss some practical consequences of our epistemological discussion because, as you know, "Philosophy without practicality is like science without rigor."

I have already mentioned one of the most important consequences: the validity of a mathematical proof depends on our knowledge of the laws of physics. Our mathematical knowledge is, therefore, not independent of physics.

The second important consequence inside mathematics has to do with the acceptance of proofs. For example, people used to think that imaginary numbers are less "real" than real numbers, and the terminology still reflects this suspicion. (This is, again, an ontological discussion, but (spoiler alert) imaginary numbers are just as "real" as real numbers—some would argue even more real.) As a result, proofs that used imaginary numbers, e.g., about the distribution of prime numbers, were unfortunately not accepted by all mathematicians. The same happens if you accept the intuitionist philosophy of mathematics and deny the infinite. This is bad for the field in general—and therefore for everyone—because it slows down progress.

Another consequence of the epistemological viewpoint I have explained in this essay is that what we can know about mathematics depends on the (real) physical laws. If you accept that a mathematical proof is a physical process governed purely by physical laws, it follows that which proofs you can perform depends on the (real) laws of physics. Our knowledge of physical laws is irrelevant here: the true laws of physics constrain what functions our brains can perform and, therefore, what proofs we can come up with. This means that some theorems that are provably unprovable in our universe could be provable in a universe with different physical laws—and vice versa. You have to be careful here, because only what we can know about mathematics depends on our laws of physics. There nonetheless exists an objective mathematical reality, which is completely independent of us and our laws of physics.

Conclusions

Mathematical knowledge is not certain. It depends, like all of our knowledge, on explanations, and we can always be mistaken about the validity of those explanations. What we can possibly prove about abstract mathematical objects depends on our laws of physics, while the validity of a specific proof depends on our fallible knowledge of those laws. There nonetheless exists an objective mathematical reality out there, which is independent of us and of our physical laws. However, we do not have direct access to this reality and, therefore, can never be certain of our knowledge.


The title of this post is taken from the eponymous book by Reuben Hersh, which does a great job demonstrating that the conventional view of mathematics is wrong. Most of the ideas in this essay are based on my readings of David Deutsch whom I cannot thank enough for his books.