
Predicting Future Knowledge

It would be great if we could predict what insights and discoveries we will make in the future, but it is a fundamental epistemological truth that we cannot: if we were capable of predicting what knowledge we will create in the future, we would thereby create it in the present. In a perfect marketplace of ideas, that would be all there is to say on the topic. But as I hinted at in my latest piece, A Skeptical Perspective on Quantum Computing, and discussed in some detail in The Truth Will Cost You Dearly, our marketplace of ideas is far from perfect. Sometimes, by looking at the incentives at play, we can guess where scientists make mistakes (or lie) without examining any specific paper or study, and thereby predict the direction future findings will take.

To illustrate this idea, let’s consider a straightforward example. Suppose ExxonMobil sponsors a research study on the environmental impact of oil spills; they even come up with a novel method to analyze the issue. As expected, the study concludes that oil spills are less harmful than commonly believed. Even without examining the details of the research or assessing the new method, we have reason to question the conclusion. Our skepticism rests on the simple fact that ExxonMobil has a vested interest in minimizing the damage caused by oil spills.

Hence, if we let a group of independent scientists investigate the same question for a few years, they will likely uncover evidence that the damage from oil spills is far more severe than the ExxonMobil-sponsored study portrayed. While we cannot predict the specific flaws these independent scientists will identify, we can anticipate the general direction of the knowledge they will produce. To put it another way, it would be highly unlikely for independent scientists to conclude that oil spills are even less harmful than the ExxonMobil study suggests. Not impossible, but improbable.
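To see the logic in miniature, here is a toy simulation. Everything in it is made up for illustration: I model the sponsor’s incentive as a one-sided bias term added to an otherwise noisy measurement, and then check where later, unbiased measurements land relative to the sponsored figure.

```python
# Toy model: a sponsored estimate carries a one-sided bias; independent
# estimates don't. All numbers are invented for illustration only.
import random

random.seed(0)

TRUE_HARM = 10.0     # hypothetical "true" severity of a spill (arbitrary units)
SPONSOR_BIAS = -3.0  # the incentive pulls the sponsored estimate downward
NOISE = 2.0          # ordinary measurement noise, the same for everyone

def sponsored_study() -> float:
    # One sponsored measurement: truth, plus the one-sided bias, plus noise.
    return TRUE_HARM + SPONSOR_BIAS + random.gauss(0, NOISE)

def independent_study() -> float:
    # One independent measurement: truth plus noise, with no systematic pull.
    return TRUE_HARM + random.gauss(0, NOISE)

reported = sponsored_study()
later = [independent_study() for _ in range(1000)]

share_worse = sum(est > reported for est in later) / len(later)
print(f"Sponsored estimate of harm: {reported:.1f}")
print(f"Later independent studies finding more harm: {share_worse:.0%}")
```

The individual independent results scatter unpredictably, but most of them land above the sponsored number, which is the whole point: the bias fixes the direction, not the details.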

Although many people would agree with this argument, most seem to have neglected to apply the principle to Covid-19 vaccinations. When the vaccines were first introduced and mandated in various parts of society (in Austria, where I reside, a law was passed mandating them for everyone), only limited information was available. Some would have preferred to wait for additional data before deciding, but for various reasons, most of them political, that was not possible.

Applying the lesson from above is simple: when Pfizer declares that its new vaccine is, say, 90% effective, you should be exactly as skeptical as when ExxonMobil denies the severity of oil spills. Note that in this case there was not only a monetary incentive that could have biased scientists, but also significant political pressure. We would therefore expect that if other scientists keep investigating the same issue for a few years, they will find the efficacy to be lower. And that is precisely what has happened: for instance, contrary to claims at the time, it is now accepted that the vaccine does not prevent transmission. The specifics were impossible to predict, but the direction could have been predicted quite easily by looking at the incentives.
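If you want to see what such discounting looks like in numbers, here is a back-of-the-envelope expected-value calculation; the prior over possible bias is entirely my invention, chosen only to make the mechanics concrete.

```python
# Back-of-the-envelope discounting of a sponsored headline number.
# Assumption (mine, for illustration): the reported efficacy is the true
# efficacy plus a bias that the sponsor's incentives make non-negative.
REPORTED = 0.90  # the headline figure

# Invented prior over how much one-sided bias the incentives added:
# 0 = fully honest reporting; larger values = increasingly flattering numbers.
bias_prior = {0.00: 0.4, 0.05: 0.3, 0.10: 0.2, 0.20: 0.1}

# Expected true figure once the one-sided bias is priced in.
expected_true = sum((REPORTED - bias) * p for bias, p in bias_prior.items())

print(f"Reported: {REPORTED:.0%}, after discounting: {expected_true:.1%}")
# prints: Reported: 90%, after discounting: 84.5%
```

The exact numbers are arbitrary; what matters is that any prior putting weight only on flattering bias moves the estimate in one direction, which is exactly the asymmetry the incentive argument predicts.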

As these examples illustrate, this way of making educated guesses about the future growth of knowledge is most useful outside of formal science. Within academia, you can simply wait for more data before drawing a conclusion. In real life, as in the case of the vaccine, you often don’t have the luxury of time, so heuristics become important. Especially when you face a decision and the current data fails to provide a clear answer, predicting which side future information will support can be a valuable tool.

As it turns out, Munger’s famous dictum, “Show me the incentive and I will show you the outcome,” applies to science as well; and if you can identify the incentive, you can adjust for it.