For some things Harvard suffices; this blog is for the rest.

Why Nuclear War Probabilities Are Useless

Since Russia invaded Ukraine, people seem to be more worried than usual about a nuclear war breaking out. I agree that a nuclear war is more likely tomorrow than it was, say, four months ago. What I take issue with, however, is how people rationalize their fear.

Their argument goes as follows. There is a certain probability (now higher because of the war between Russia and Ukraine) that a nuclear war will break out in any given year or on any given day. If we iterate over enough years (or days), we will get a nuclear war with near certainty at some point. In other words, the cumulative probability of a nuclear war approaches 100% over time. The Gambler’s Ruin is often used as an analogy for this scenario.
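The compounding logic behind this argument can be sketched in a few lines of Python. This is my illustration, not anyone's actual model, and the 1% annual rate is just a placeholder:

```python
# Illustrative sketch: compounding a fixed annual probability, as the
# dice-rolling argument assumes (independent, identical odds each year).
def cumulative_probability(p_annual: float, years: int) -> float:
    """P(at least one event within `years` years) for annual probability p_annual."""
    return 1 - (1 - p_annual) ** years

# Even a small annual rate compounds toward near certainty:
print(round(cumulative_probability(0.01, 50), 3))   # 0.395
print(round(cumulative_probability(0.01, 500), 3))  # 0.993
```

Note that the entire result rests on the assumptions baked into that one line: a fixed probability and independence from year to year.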

Here are two Tweets by Balaji as examples for what I am talking about, but rest assured that a lot of people have been arguing this way:


Alex Tabarrok recently published a short post discussing a survey of experts who estimate the annual probability of a nuclear war at 1.17%. Alex states:

For a child born today (say 75 year life expectancy) these probabilities (.0117) suggest that the chance of a nuclear war in their lifetime is nearly 60%, (1-(1-.0117)^75).

That is a very precise estimate. One is tempted to ask these experts why the probability is not closer to, say, 0.0121, but that is beside the point. Even Alex readily admits that these estimates are very unreliable by quoting the survey:

We shouldn’t put too much weight on these estimates, as each of the data points feeding into those estimates come with serious limitations.

The probability mentioned above is made up of expert guesses and “historical frequency.” I think discussing the latter is unnecessary, as everyone should agree that war is the last topic where we should expect the past to predict anything about the future. As for expert guesses, they are just that: guesses. We have, after all, not much evidence to look at.

This leads to a simple rule: for rare events, it is impossible to make a convincing case that an event is very unlikely (1 in 100,000) but not extremely unlikely (1 in 1,000,000), because, by the nature of rare events, there is not much data to base your estimates on.

Note also that if the probability estimate were correct, we would be lucky to be alive today. This might well be true, but I implore the reader to consider a different explanation: probabilities and the dice-rolling analogy are inaccurate and useless when trying to predict these events.


The deeper problem with trying to estimate these high-impact, low-probability events is hinted at by Nick Szabo in his essay Pascal’s scams:

Poor evidence leads to what James Franklin calls "low-weight probabilities", which lack robustness to new evidence. When the evidence is poor, and thus robustness of probabilities is lacking, then it is likely that "a small amount of further evidence would substantially change the probability."

Due to the very nature of rare events, we do not have much data about them and not much confidence in our estimates. New evidence can therefore shift our probabilities drastically.
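One way to make this fragility concrete is a standard Beta-Binomial sketch (my illustration, not Szabo’s or Franklin’s): with sparse data, a single new observation moves the estimate substantially; with plenty of data, the same observation barely registers.

```python
# Sketch of why sparse data makes probability estimates fragile.
# Under a uniform Beta(1, 1) prior, the posterior mean of the event
# probability after observing `events` out of `trials` is:
def posterior_mean(events: int, trials: int) -> float:
    """Posterior mean of event probability, Beta(1, 1) prior."""
    return (events + 1) / (trials + 2)

# Rare event, little data: one new observation shifts the estimate a lot.
sparse_shift = posterior_mean(1, 9) - posterior_mean(0, 8)    # ~0.08

# Common event, lots of data: the same single observation barely matters.
dense_shift = posterior_mean(101, 1001) - posterior_mean(100, 1000)  # ~0.001

print(round(sparse_shift, 3), round(dense_shift, 3))
```

The numbers are arbitrary; the asymmetry is the point. A probability backed by eight data points is exactly the kind of "low-weight" estimate Franklin describes.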

If you asked the experts from the 2019 survey cited above about their probabilities again today, you would naturally get higher estimates, and they would be “correct.” But the same thing could happen in the other direction: Russia conquers Ukraine, sanctions are imposed, and the military situation calms down. Now the probability of a nuclear war is lower.

As you can see, not only is your current probability estimate unreliable to the point of uselessness, but you cannot make any meaningful predictions about the future because new evidence will drastically alter your estimate. This deep epistemological limitation is a consequence of the inherent unpredictability of the future and unfortunately cannot be overcome.

Put differently, Putin does not roll a die every morning and launch a nuclear missile if a six comes up. Even he himself cannot meaningfully answer the question “What’s the probability that Russia will launch nuclear missiles 6 months from now?” unless his answer is 100%. It depends on many factors, like how NATO reacts and how strong the Ukrainian military is, none of which can be captured by probabilities.

The dice-rolling analogy makes it seem like we cannot influence the future and our actions are independent of the “probabilities”; it makes it seem like at some point we are going to blow ourselves up anyway, no matter how low the probability on any given day. That is simply wrong and the worst kind of pessimism in the Deutschian sense.


The future is shaped by what we humans do today, which cannot be captured by probabilities or predicted to any significant degree – especially for rare events. If we are alive 500 years from now, it does not mean the probability estimates were wrong, but that we have successfully managed to negotiate a solution without nuclear war. Putting a probability on that is simply meaningless.

More precisely, the probability expresses nothing fundamental about the situation; rather, it expresses the expert’s uncertainty and lack of information about the future. For deterministic, high-probability events, this may still be useful, but otherwise it becomes a useless exercise.

Let us look at a simpler example, weather forecasting, to better understand what probability even means in our context. When we say, “There is a 75% chance it will rain tomorrow in NYC,” that probability does not correspond to anything in nature. In other words, the weather is not unsure whether it is going to rain tomorrow; it either will or will not rain. It is not up to chance – it is already “fixed” because the weather is fully deterministic, chaos theory notwithstanding. We just do not know the outcome. The probability of 75% concerns our uncertainty and our knowledge about the system, not the system itself. This might seem like a small semantic quibble, but it is not: we could calculate a different and more precise probability for rain tomorrow with a better model or better input data, even though the weather itself remains unchanged.

When predicting nuclear war, the situation is even worse because humans are involved and they are neither deterministic nor predictable. (I am trying to avoid a philosophical discussion about free will here as it is irrelevant to my argument. Even if people had no free will and everything was deterministic, we could not predict any significant action or decision, and it would therefore be as if people had free will.)

When people talk about the probability of nuclear war, they are talking about their knowledge and uncertainty of an outcome that is, in contrast to weather forecasting, not fixed but completely open. The estimated probability is therefore simply useless – or in other words, bullshit.


Realizing that we can shape the future, and that it is not nearly certain we blow ourselves up at some point because of “the nuclear gambler’s ruin,” is a hopeful and deeply optimistic message in the Deutschian sense. It does not mean we will prevent nuclear war; it simply means that if we take the right actions, we can stave off disaster indefinitely, and I hope we do.