A Skeptical Perspective on Quantum Computing
The concept of a quantum computer (QC) was first conceived by physicist Paul Benioff in 1980. The following year, Richard Feynman, frustrated by the limitations of classical computers in simulating quantum systems, independently came up with a similar idea. He famously stated, “Nature isn’t classical, dammit, and if you want to make a simulation of Nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.” In 1985, David Deutsch took these vague ideas and formalized them in his paper Quantum theory, the Church-Turing principle and the universal quantum computer.
In just one of many such articles, the New Yorker recently claimed that this hypothetical device “could open new frontiers in mathematics, revolutionizing our idea of what it means to ‘compute.’” It “could spur the development of new industrial chemicals, addressing the problems of climate change and food scarcity.” And “it could reconcile the elegant theories of Albert Einstein with the unruly microverse of particle physics, enabling discoveries about space and time.”
The Biden administration, too, is convinced of the power of a QC and sees the need to act before it’s too late: “Earlier this year, the Biden Administration announced that it was moving toward new, quantum-proof encryption standards that offer protection from Shor’s algorithm. Implementing them is expected to take more than a decade and cost tens of billions of dollars, creating a bonanza for cybersecurity experts.”
The excitement about the potential of this new technology extends beyond the media and the public sector into the private sector as well. Jeremy O’Brien, the C.E.O. of PsiQuantum, says that “The impact of quantum computing is going to be more profound than any technology to date.” This sentiment is echoed by leading technology companies such as IBM, Microsoft, and Google, which are all actively competing to develop the first functional quantum computer.
Classical vs Quantum
A classical computer works by manipulating many tiny transistors, essentially on/off switches, called bits. A computer with N transistors can therefore be in one of 2^N possible states.
A quantum computer uses quantum bits (qubits) that also have two main states. The simplest example of a qubit is an electron’s angular momentum or spin. For any coordinate axis, this spin can only be +1/2 or -1/2. The two states of an electron’s spin in regard to a specific axis are called spin up (↑) and spin down (↓).
Unlike a classical bit, a qubit can be in ↑ and ↓ “at the same time.” This state is called superposition. Mathematically, a qubit is described by the following equation: ψ = a|1⟩ + b|0⟩. The amplitudes a and b, also called α and β, are complex numbers whose squared magnitudes give the probabilities of finding the qubit in either state. A qubit may, for example, be found ↓ 80% of the time and ↑ 20% of the time.
In short, a qubit is always in a superposition of ↓ and ↑, but when measured, it will be in either one. Physicists call this process of observing the qubit and destroying the superposition – that is, forcing the qubit to decide on ↑ or ↓ – the collapse of the wave function.
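To make the 80/20 example above concrete, here is a minimal sketch in plain Python with NumPy (the amplitude values are purely illustrative) that prepares such a superposition and simulates repeated measurements, each of which collapses the qubit to one definite outcome:

```python
import numpy as np

# A qubit state psi = a|1> + b|0>, chosen so that |b|^2 = 0.8 (spin down)
# and |a|^2 = 0.2 (spin up). These values are purely illustrative.
a = np.sqrt(0.2)  # amplitude of |1> (spin up)
b = np.sqrt(0.8)  # amplitude of |0> (spin down)
assert np.isclose(abs(a) ** 2 + abs(b) ** 2, 1.0)  # amplitudes must be normalized

rng = np.random.default_rng(0)

def measure():
    """One measurement: the superposition collapses to a definite outcome."""
    return "up" if rng.random() < abs(a) ** 2 else "down"

results = [measure() for _ in range(10_000)]
print("fraction up:  ", results.count("up") / len(results))    # ~0.2
print("fraction down:", results.count("down") / len(results))  # ~0.8
```

Over many runs the counts approach the 20/80 split, but any single measurement only ever yields ↑ or ↓.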
A system with two qubits has 4 basic states: (↑↑), (↑↓), (↓↑), and (↓↓). The state of an N-qubit system is therefore described by 2^N complex numbers (the quantum amplitudes). The distinguishing feature of a quantum computer, and what in theory grants it its immense power in contrast to a classical computer, lies in the following. A conventional computer with N bits can only be in one of its 2^N possible states, while a quantum computer must be described by all 2^N quantum amplitudes. In other words, a classical computer with two bits can be described by two numbers (one for each bit), while a quantum computer with two qubits must be described by 4 complex numbers (one amplitude for each of the 4 basic states mentioned above).
As a consequence, the information contained inside a quantum computer gets inconceivably large very quickly. For example, it is estimated that a useful QC would need around 10,000 qubits. But to describe the state of even 1,000 qubits we would need 2^1000, or roughly 10^300, complex numbers. That’s more than the number of particles in the observable universe. This is where the hypothetical power of a quantum computer comes from, but it might also be the reason it will never work.
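The arithmetic behind these numbers is easy to check directly. A short sketch in plain Python (exact integer arithmetic; the 10^80 figure for particles in the observable universe is the usual rough estimate):

```python
# Number of complex amplitudes needed to describe an N-qubit state: 2**N.
PARTICLES_IN_UNIVERSE = 10**80  # the usual rough estimate

for n_qubits in (2, 10, 50, 300, 1000):
    amplitudes = 2**n_qubits
    order_of_magnitude = len(str(amplitudes)) - 1
    note = " <- more than the particles in the observable universe" if amplitudes > PARTICLES_IN_UNIVERSE else ""
    print(f"{n_qubits:>5} qubits: 2^{n_qubits} amplitudes (~10^{order_of_magnitude}){note}")
```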
Building A Quantum Computer
Operating a quantum computer involves manipulating a vast number of continuous parameters, the above-mentioned quantum amplitudes, through quantum gates. Put differently, the classical equivalent of a QC is not a digital computer but an analog computer. We are not manipulating discrete on/off states, but continuous amplitudes. As one might imagine, this means error correction is the most difficult and important problem when building and operating a quantum computer.
Error correction in classical computers is relatively straightforward due to their discrete nature. However, in quantum computers, which must manipulate an astronomical number of continuous variables, error correction poses an immense challenge. Worse, qubits are extremely fragile and lose their particular state over time, a process known as decoherence. To prevent that, the quantum computer needs to be shielded from all outside influences. We can also never observe qubits without making them useless by collapsing the wave function. All this taken together makes error correction an insanely difficult problem to solve. In fact, nobody knows how to do it.
To address the issue of error correction, researchers came up with the idea of using multiple physical qubits to create a logical qubit. Moreover, they “proved” the threshold theorem, which states that if the error rate is below a certain threshold, arbitrarily long computations become possible by adding more and more physical qubits. This leads to the need for more and more physical qubits – around a million – to simulate enough logical qubits – around 100 – for a useful computation to be performed. However, there is a significant disparity between the sophisticated theories around error correction, which often involve millions of qubits, and the rudimentary experiments currently being conducted.
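To get a feel for where the million-qubit figure comes from, here is a back-of-the-envelope sketch. The scaling formula used below – logical error ≈ 0.1·(p/p_th)^((d+1)/2) for code distance d, roughly 2d² physical qubits per logical qubit, and a 1% threshold – is a commonly quoted rough model for surface codes, not a measured property of any real device:

```python
def physical_qubits_per_logical(p_phys, p_target=1e-15, p_threshold=1e-2):
    """Rough surface-code overhead model (a textbook heuristic, not a measurement):
    logical error ~ 0.1 * (p_phys / p_threshold)**((d + 1) / 2) for code distance d,
    with roughly 2 * d**2 physical qubits per logical qubit."""
    assert p_phys < p_threshold, "the threshold theorem only helps below threshold"
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d, 2 * d * d

for p_phys in (5e-3, 1e-3, 1e-4):
    d, per_logical = physical_qubits_per_logical(p_phys)
    print(f"physical error {p_phys:.0e}: distance {d}, "
          f"~{per_logical:,} physical qubits per logical qubit, "
          f"~{100 * per_logical:,} for 100 logical qubits")
```

Depending on the assumed physical error rate, 100 logical qubits land somewhere between tens of thousands and a few million physical qubits – hence the gap between the theory and today’s devices with a few hundred noisy qubits.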
The gap between theory and experiment in quantum computing is best explained by the fact that a QC, being a physical device, can never fulfill any requirement exactly. No continuous variable can be, for example, exactly zero, or an angle exactly 45°. There will always be an error, as Dyakonov explains in his 2014 paper:
However, in the physical world none of these assumptions can be fulfilled exactly. Any assumption can be only approached with some limited precision. So, the rather meaningless “error per qubit per gate” threshold must be supplemented by a list of the precisions with which all assumptions behind the threshold theorem should hold. Such a list still does not exist. The theory also seems to ignore the undesired free evolution of the quantum computer caused by the energy differences of quantum states entering any given superposition. Another important point is that the hypothetical quantum computer will be a system of 10^3–10^6 qubits PLUS an extremely complex and monstrously sophisticated classical apparatus. This huge and strongly nonlinear system will generally exhibit instabilities and chaotic behavior.
A qubit can never be only ↑ every time we measure it. There must always be a small, non-zero, probability that we measure it as ↓. In short, the physical world is messy, especially on the tiny scales a quantum computer operates at.
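To see how “never exactly 45°” bites in practice, here is a toy sketch of a single, perfectly isolated qubit (no decoherence at all) whose gate systematically over-rotates by an assumed 0.1% per application:

```python
import numpy as np

def y_rotation(theta):
    """Single-qubit rotation about the Y axis by angle theta."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

ideal = np.pi / 4              # the intended 45-degree rotation
real = ideal * 1.001           # assumed 0.1% systematic over-rotation
start = np.array([1.0, 0.0])   # qubit prepared in the |0> state

for n_gates in (1, 100, 1000, 2000):
    state_ideal = np.linalg.matrix_power(y_rotation(ideal), n_gates) @ start
    state_real = np.linalg.matrix_power(y_rotation(real), n_gates) @ start
    fidelity = abs(np.vdot(state_ideal, state_real)) ** 2
    print(f"{n_gates:>5} gates: fidelity with the intended state = {fidelity:.4f}")
```

With nothing but this tiny systematic error, the overlap with the intended state is already down to roughly 50% after 2,000 gates – and a useful computation involves far more gates, far more qubits, and far messier noise than a constant over-rotation.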
We could go down this rabbit hole endlessly, but I do not want to bore the reader with too much quantum mechanics. Let us just look at one example to see how this problem manifests in real quantum computing papers. For this purpose, Decoherence by Correlated Noise and Quantum Error Correction by E. Novais and Harold U. Baranger (2006) should suffice. We are not concerned with their broader argument around correlated noise but only with their assumptions:
To follow the long time behavior of the computer, we remove non-essential elements and assume: (1) Quantum gates are perfect and operate much more quickly than the characteristic response of the environment. (2) States of the computer can be prepared with no errors. (3) Thermal fluctuations are suppressed. Finally, for clarity in the spin-boson example, we consider ohmic coupling between the environment and the qubits.
Even without any knowledge of quantum physics, it should be apparent that these assumptions are wrong – that is, they can never be fulfilled exactly – for any real physical device.
The Future
I consider the following scenario, proposed by Levin in his paper The Tale of One-Way Functions, a likely future for quantum computing:
QC proponents often say they win either way, by making a working QC or by finding a correction to Quantum Mechanics; e.g., Peter Shor said: “If there are nonlinearities in quantum mechanics which are detectable by watching quantum computers fail, physicists will be VERY interested (I would expect a Nobel prize for conclusive evidence of this).”
Consider, however, this scenario. With few q-bits, QC is eventually made to work. The progress stops, though, long before QC factoring starts competing with pencils. The QC people then demand some noble prize for the correction to the Quantum Mechanics. But the committee wants more specifics than simply a nonworking machine, so something like observing the state of the QC is needed. Then they find the Universe too small for observing individual states of the needed dimensions and accuracy. (Raising sufficient funds to compete with pencil factoring may justify a Nobel Prize in Economics.)
Of course, it’s not possible to definitively prove that building a quantum computer is impossible – and I still hope it turns out to be possible – or that any specific problem will not be solved by a QC in the future, but Dyakonov highlights the absurdity of this line of argument through a humorous story in his book Will We Ever Have a Quantum Computer?:
The QC story says a lot about human nature, the scientific community, and the society as a whole, so it deserves profound psycho-sociological studies, which should begin right now, while the main actors are still alive and can be questioned.
A somewhat similar story can be traced back to the 13th century when Hodja made a proposal to teach his donkey to read and obtained a 10-year grant from the local Sultan. For his first report he put breadcrumbs between the pages of a big book, and demonstrated the donkey turning the pages with his hoofs. Although no scientific publications followed, this was a promising first step in the right direction.
Nasreddin was a wise but simple man, so when asked by friends how he hopes to achieve his goal, he answered: “My dear fellows, before ten years are up, either I will die or the Sultan will die. Or else, the donkey will die.”
Had he the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.
Second, he could say that his research may, with some modifications, be generalized to other animals, like goats and sheep, as well as to insects, like ants, gnats, and flies, and this will have a tremendous potential for improving national security: these beasts could easily cross the enemy lines, read the secret plans, and report them back to us.
As an aside, I assume someone is going to ask me about the new paper Factoring integers with sublinear resources on a superconducting quantum processor by a group of Chinese researchers. Many people seem to be impressed and believe it constitutes some form of progress. In fact, it’s basically scientific malpractice. One of the last sentences in the paper exposes the entire exercise as a useless farce: “It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.” Translation: “There is basically no way there is a speed up here, but nobody has proven that there isn’t, so we call it unclear."
In short, the paper is dishonest nonsense; or, as Scott Aaronson, one of the most honest researchers and author of the great book Quantum Computing since Democritus, has put it in his post about the paper:
It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so. All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.
All of our current noisy intermediate-scale quantum (NISQ) computers use various tricks to factor small numbers. Because we know the answer beforehand, researchers use this knowledge when designing experiments. This is called the “compiled” version of Shor’s algorithm, but it should more honestly be called the shortcut version. Using knowledge of the outcome when designing an experiment strikes me as incredibly dishonest, especially when not properly disclosed. In addition, Smolin et al. showed that by using the answer, one can simplify the factoring algorithm for any number, no matter how large, to the circuit equivalent of flipping a coin.
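A toy illustration of the point Smolin et al. make: once the experimenter already knows that N = p·q, the “compiled” circuit can be whittled down until all that remains is a random bit selecting one of the two known factors – a coin flip dressed up as factoring. The numbers below are arbitrary examples:

```python
import random

def compiled_factoring_demo(N, p, q):
    """A 'factoring experiment' that secretly uses the known answer (p, q).
    The entire computation reduces to one random bit, as Smolin et al. point out."""
    coin = random.randint(0, 1)      # the whole "quantum" part
    factor = p if coin == 0 else q   # the answer was wired in beforehand
    assert N % factor == 0
    return factor

# Works equally "well" for any size of number, because nothing is computed:
print(compiled_factoring_demo(15, 3, 5))
print(compiled_factoring_demo(21, 3, 7))
print(compiled_factoring_demo(1_000_000_016_000_000_063, 1_000_000_007, 1_000_000_009))
```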
It’s worth repeating that scaling up these noisy (non-error-corrected) quantum computers is proving to be insanely difficult. This is primarily due to the exponential increase in the number of parameters that must be accounted for with each added qubit. For instance, a quantum computer with just 5 qubits requires consideration of 2^5 = 32 parameters, but with 50 qubits – the target set by IARPA for 2012 – the number of parameters jumps to an astronomically high 2^50 = 1,125,899,906,842,624.
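Restated in terms of the classical resources needed merely to keep track of such a device, here is a small sketch (assuming 16 bytes per complex amplitude, i.e. two 64-bit floats):

```python
# Memory needed just to store the 2**N complex amplitudes of an N-qubit
# state on a classical machine, assuming 16 bytes per amplitude.
BYTES_PER_AMPLITUDE = 16

for n_qubits in (5, 30, 50):
    n_bytes = 2**n_qubits * BYTES_PER_AMPLITUDE
    print(f"{n_qubits:>3} qubits: 2^{n_qubits} = {2**n_qubits:,} amplitudes -> {n_bytes:,} bytes")

# 5 qubits fit in half a kilobyte; 30 qubits already need ~17 GB of RAM;
# 50 qubits need ~18 petabytes -- far more memory than any machine on Earth has.
```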
Gaming The Funding System
The picture presented above stands in stark contrast to what is often portrayed in the media or communicated by many experts in the field. This discrepancy can be explained by a combination of selection effect and incentives.
Individually, a researcher has the incentive to present his area of expertise as important and as promising as possible. That’s why someone told the New Yorker that a quantum computer could address “the problems of climate change and food scarcity.” Everyone who knows anything about quantum computers knows that this claim is ridiculous. The main use of a quantum computer will be the one Feynman mentioned in 1981: the simulation of a quantum system. However, promising to solve problems like climate change and food scarcity attracts grant funding, so that’s what some experts say.
This leads to the misallocation of grant money to anyone willing to argue that their research field can solve these types of problems (I assume pandemic modeling will join the list going forward).
Additionally, esteemed experts in a field are never overly critical of it. If they were, they would be working in a different field. The culture of academia, in combination with the politics that result from the grant process, discourages criticism of peers’ work, leading to situations where questionable papers, like the previously mentioned Chinese paper, can pass through the “peer-review” process without receiving the necessary scrutiny. Only afterward do the few honest researchers like Scott Aaronson have the opportunity to say, “No. Just No.”
These existing incentives get amplified by the selection effect. When journalists from the New Yorker write a piece about quantum computing, they typically consult renowned experts in the field and include only their most outlandish predictions. This leads to a biased representation of the field and makes it rare for a critical voice to appear in a mainstream publication. It is worth noting, though, that the New Yorker piece did include a brief quote from a Chinese researcher hinting at a more skeptical perspective:
“We see fairly few quantum algorithms where there is proof of exponential speedup,” he said. “In many cases, it’s not clear that it wouldn’t be better to use a regular computer.”
Note how he correctly places the burden of proof for a speedup on the researchers. In the Chinese paper we discussed above, the researchers used “unclear” in the sense that nobody had proven there is no speedup, so it was therefore reasonable to assume there is one. They put the burden of proof on everyone else. It should go without saying that that’s a ridiculous and dishonest way to do science.
It is not just academic grant committees who are being misled by quantum computing experts, but private investors as well. In addition to the prominent players such as IBM, Google, and Microsoft, there are numerous smaller startups operating in the field that have received venture funding. Based on the information presented in this essay, it should be clear that these companies won’t produce any value in the near term. (I might publish some more detailed research about a few public quantum computing companies one could short. In general, shorting overhyped science seems to be an underserved market in the hedge fund world.)
Nonetheless, venture capitalists continue to invest in the hype surrounding quantum computing, writing articles that start with paragraphs like this:
The development in quantum computing is only speeding up, and since my last dive into the space at the beginning of last year, I’ve met many more researchers, customers, and companies working on the space. This only deepened my focus on near-term quantum, so in this piece I narrowed in on: hybrid quantum-classical algorithms and their promising applications to drug development; and the kinds of business models massive or venture-scale companies will use to employ near-term quantum.
To be clear, there is no need to focus on “near-term quantum” because, for any practical problem, a laptop will comfortably outperform any quantum computer for the foreseeable future.
Despite claims to the contrary, quantum computing may not be as valuable in practical real-world applications as investors think. Even computational chemistry, which is arguably the best example of a potential real-world application, seems to profit far less from a quantum computer than previously thought (see Elfving et al.). In other instances, a better classical algorithm is often found that essentially eliminates the theoretical quantum speedup, as was the case for the “recommendation problem.”
Quickly skimming the companies that are listed in the article “based on the claims they make and some digging into technical papers they’ve published” we find Kuano and Polarisqb. The website of Kuano claims they are “Redefining drug discovery with Quantum and AI” while Polaris apparently focuses on “Drug blueprints to treat and cure all diseases for all people.” This is beyond parody.
All this does not suggest that efforts to build a quantum computer should come to a halt. But the endeavor should be approached as a moonshot scientific experiment that may yield limited practical applications even if successful. Investors should think of it more as a Large Hadron Collider 2.0 than a classical computer 2.0. That is, they should currently not invest in it at all if their aim is to turn a profit.
All these overly positive articles would do well to start off with Rolf Landauer’s famous footnote:
This proposal, like all proposals for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work.