The Religion of Science And Its Consequences
One curious fact about our era is that science has become a de facto religion, with scientists serving as clergy. I was reminded of this recently while reading Unsettled, in which author Steve Koonin quotes the first paragraph of the “How We Respond” report about climate change by the American Association for the Advancement of Science (AAAS):
Our nation, our states, our cities and our towns face an urgent problem: climate change. Americans are already feeling its effects and will continue to do so in the coming decades. Rising temperatures will impact farmers in their fields and transit riders in cities. Across the country, extreme weather events such as hurricanes, floods, wildfires and drought are occurring with greater frequency and intensity. While these problems pose numerous risks to society and the planet, undoubtedly the biggest risk would be to do nothing. Science tells us that the sooner we respond to climate change, the lower the risks and the costs will be in the future.
Rather than focusing on the object-level discussion of climate change, I think it will prove more interesting to scrutinize how “Science” is used in the last sentence.
The Meaning of “Science”
The sentence would be straightforward if the authors had used “Scientists” instead:
Scientists tell us that the sooner we respond to climate change, the lower the risks and the costs will be in the future.
The meaning of this sentence is obvious: A group of people is expressing an opinion about the need to act in the face of climate change. But what exactly does the original sentence mean?
While one could interpret “Science” as referring to the epistemological concept of methodologies employed to discern truths, this seems incongruous in our particular context. Instead, “Science” seems to serve as a shorthand for a collective body of research: observations, experiments, and models related to climate change. This body of research presumably “tells us that the sooner we respond to climate change, the lower the risks and the costs will be in the future.”
Additionally, “Science” as used here goes beyond the descriptive by implying an ethical obligation to act in a certain way. This violates Hume’s is/ought distinction, lending “Science” a religious character. Blurring the distinction between science and religion (between descriptive and normative claims) leads to the “Believe The Science” disaster we have observed over the last few years.
Linguistically, “Science” is presented as the acting entity in the sentence. Rewriting the beginning of the sentence, “Science tells us,” as “We have been told by Science” highlights how strange and ridiculous it is to have “Science” as the main actor (although, note, it makes sense if you replace “Science” with “God”).
The autonomous agency attributed to this research disregards the epistemological fact that all data has to be interpreted by scientists. For example, the validity of the models in this collective body of research can only be judged by scientists who may well be mistaken. The models as such “tell” us nothing.
The phrasing was nonetheless chosen because scientists don’t see themselves as regular (well-informed) people expressing their opinions. They think “Science” speaks through them. That’s how to make sense of the strange linguistic observation above. Since all agency is attributed to “The Science,” scientists appear to not have any. Consequently, they can’t be blamed for any mistakes.
The way “Science” is used does something else important: it implies a consensus within a group of scientists. Presumably, every “climate change expert” agrees with the facts as stated in the paragraph.
Thus, the term “Science” accomplishes four distinct objectives: (1) it serves as an umbrella term for a body of climate research, (2) implies an ethical component, (3) denies scientists all agency by transferring it to “The Science,” and (4) suggests an epistemic consensus amongst scientists.
Predicting The Future
Technological progress is unpredictable. In particular, “the sooner we respond to climate change, the lower the future costs” ignores the possibility of inventions like cheap carbon capture, which would make early action wasteful or even counterproductive. Due to fundamental epistemological limitations, we can’t know what technologies we will invent in the future. Still, the possibility means that we could be better off not trying to mitigate climate change until we make such an invention. Therefore, “the sooner we respond to climate change, the lower the risks and the costs will be in the future” could turn out false even if climate scientists are 100% correct about their models.
Such a statement only makes sense together with the assumption that no new groundbreaking technologies will be invented. This assumption may well be correct – although I doubt it – but it cannot be substantiated by evidence. In other words, it cannot be part of the scientific body of research regarding climate change.
This is not just some philosophical (or worse, linguistic) quibble. Understanding that responses to climate change can themselves increase the risks to our civilization, as I argued three years ago in Climate Change and the Risk of Stagnation, is vital to the discussion:
New knowledge and new technologies give us the best chance of surviving the obstacles we do not yet know. This is the risk argument for fast progress. Slowing down the progress of society — in order to reduce CO2 emissions — decreases the risk caused by climate change, but it increases the risk of future extinction by some — in the present — unknown cause. This constrains the appropriate measures we should take to reduce CO2 emissions.
Worse, reducing CO2 emissions might turn out to be counterproductive and result in an increased risk posed by climate change because we might not invent certain technologies that are needed to “solve” climate change.
Even a selective reduction of energy consumption to combat climate change overlooks the complexities involved, as previously unrelated fields might prove useful in the future. Consider artificial intelligence, which, while consuming considerable amounts of energy, holds immense potential for groundbreaking innovations in many fields.
My argument should not be misconstrued as a wholesale dismissal of the need to reduce CO2 emissions or take other actions. Rather, the point is that the epistemic boundaries of science do not extend to predictive certainties about the future. Consequently, the complex trade-offs between CO2 reduction and the potential hindrance of future progress defy quantitative or “scientific” resolution. Such a decision necessitates a human faculty we thought we had banished from the political realm long ago: judgment.
Science and Government by Steam
The fallacy of thinking “Science” could tell us what to do was already identified in 1956 by William Whyte in The Organization Man:
As in other such suggested projects, the scientific elite is not supposed to give orders. Yet there runs through all of them a clear notion that questions of policy can be made somewhat nonpartisan by the application of science. There seems little recognition that the contributions of social science to policy-making can never go beyond staff work. Policy can never be scientific, and any social scientist who has risen to an administrative position has learned this quickly enough. Opinion, values, and debate are the heart of policy, and while fact can narrow down the realm of debate, it can do no more. And what a terrible world it would be; Hell is no less hell for being antiseptic.
Trying to eliminate human judgment from the decision-making processes doesn't render those decisions "scientific"; it renders them opaque and absolves the decision-makers of accountability.
In contemporary governance, precisely that mechanism ensures that culpability remains elusive because politicians seemingly relinquish making decisions and instead “follow the Science.” Scientists, for their part, also disclaim responsibility by presenting themselves, as we have discussed, as mere conduits of “The Science.” When “The Science” changes, as it is wont to do, what scientists say changes as well. It’s a mechanical process in which nobody is at fault when something goes wrong.
Covid-19 provided ample examples of this intricate and fascinating mechanism. Take masks, for instance. Initially dismissed, their usage became mandated, only for the New York Times to later declare, “The Mask Mandates Did Nothing.” Interestingly, the science itself hasn’t wavered. The NYT article draws from a meta-analysis titled “Physical interventions to interrupt or reduce the spread of respiratory viruses,” which primarily leans on pre-Covid studies. In essence, the data (and the consensus!) existed well before the pandemic; what changed was not the scientific research but the political climate.
In reality, Science can’t be made devoid of human judgment any more than politics can. There is no mechanical process that, if followed, spits out Truth or good government. As Carlyle so aptly observed in Chartism:
Statistics is a science which ought to be honourable, the basis of many most important sciences; but it is not to be carried on by steam, this science, any more than others are; a wise head is requisite for carrying it on.
One may build an elaborate mechanical process, as we surely have with all the grants, journals, and peer review, but that process won’t produce truth if the people are not exercising sound judgment in the pursuit of it.
Reforms aimed at academic institutions usually set their sights on procedural changes, operating under the belief that tweaking the machine can fix its output. Yet, what remains largely unaddressed is that many scientists are less interested in the truth and more in worldly pursuits: grants, promotions, status, and fame. When these align with truth, it's serendipitous; when they don't, truth is often the casualty.
Mechanical Solutions
Some of my favorite ideas for fixing academia come from Balaji, who advocates for what he calls Cryptoscience. His approach involves putting the data and code used to produce the research paper (à la reproducible research) onto a blockchain, which would then turn citations into import statements, creating a system of composable science. While he doesn’t delve into how to implement these ideas in our current academic system – largely because that’s impossible – the even larger issue is that they don’t address the core human problem highlighted above.
A great example of a mechanical solution gone wrong is p-hacking. On the face of it, requiring statistical analyses to show significance at p < 0.05 seems like a great idea: nobody could simply claim an effect without making sure it was real and not just luck or randomness. Very quickly, this became a disaster, as predicted by Goodhart’s law: when a measure becomes a target, it ceases to be a good measure.
It has now become a game of how to get a statistically significant result out of the dataset to publish a paper. This is usually done through such means as selectively reporting outcomes, tweaking sample sizes, or adjusting the statistical tests after seeing the data. Worse, most mechanisms employed for p-hacking are potentially legitimate decisions that can be defended as sound scientific practices. It's how and when these decisions are made that is problematic, making p-hacking very difficult to prove in individual cases.
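One common form of p-hacking mentioned above, adjusting sample sizes after seeing the data, can be made concrete with a small simulation: a researcher studies a nonexistent effect but keeps collecting data and re-testing until the result comes out significant (so-called optional stopping). Here is a minimal sketch in Python, using only the standard library; all function names and parameters are my own illustrative choices:

```python
import random
import statistics
from statistics import NormalDist

def p_value(a, b):
    """Two-sided z-test on the difference of means (unit variance assumed)."""
    se = (2 / len(a)) ** 0.5
    z = abs(statistics.fmean(a) - statistics.fmean(b)) / se
    return 2 * (1 - NormalDist().cdf(z))

def false_positive_rate(optional_stopping, trials=2000, seed=0):
    """Fraction of null experiments that reach p < 0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Both groups come from the SAME distribution: there is no real effect,
        # so any "significant" result is a false positive.
        a = [rng.gauss(0, 1) for _ in range(20)]
        b = [rng.gauss(0, 1) for _ in range(20)]
        while True:
            if p_value(a, b) < 0.05:
                hits += 1
                break
            if not optional_stopping or len(a) >= 100:
                break
            # The p-hacker's move: the result wasn't significant,
            # so collect ten more samples per group and test again.
            a += [rng.gauss(0, 1) for _ in range(10)]
            b += [rng.gauss(0, 1) for _ in range(10)]
    return hits / trials

print(f"fixed sample size: {false_positive_rate(False):.3f}")  # near the nominal 0.05
print(f"optional stopping: {false_positive_rate(True):.3f}")   # noticeably higher
```

The fixed-sample test produces false positives at roughly the advertised 5% rate, while repeatedly peeking and extending the sample inflates it well beyond that, even though each individual test looks perfectly legitimate. This is exactly why the decisions involved can be defended in isolation while the overall procedure is corrupt.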
Any mechanical fix will share the same fate because you can’t produce reliable, honest science without reliable, honest scientists. There exists no substitute for judgment and integrity. If you have those, the proper methodologies will naturally emerge. The prevailing focus on codifying and mandating specific methodologies represents a form of cargo cult science: we see good scientists employing these methods, so we make them mandatory, assuming it will turn everyone into a good scientist. However, this gets the causality backwards. The end result is not good science, but bad science masquerading as good—akin to wooden planes that can't fly.
Scientists already have the option – and arguably the obligation – to make their research as open and transparent as possible. They already should be publishing their data and uploading their code, but for some reason, many don’t. Ian Hussey confirmed again in his 2023 paper that data is not available upon request:
Only 25% of articles’ authors actually shared data upon request. Among articles stating that data was available upon request, only 17% shared data upon request. The presence of Data Availability Statements was not associated with higher rates of data sharing (p = .80). Results replicate those found elsewhere: data is generally not available upon request, and promissory Data Availability Statements are typically not adhered to.
Additionally, mechanical solutions focused on reproducibility fail because of Brandolini's law:
The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
The incentive structure is skewed too: faking data can bring fame and money, while exposing fraud offers little more than moral satisfaction. These incentives, coupled with the social ostracization that comes with criticizing the work of colleagues, have turned "peer review" into an ineffective ritual.
Is there a new technology, maybe the blockchain, that could solve these problems? I maintain that there is no mechanical solution to this human problem because no technology can force a scientist to be honest or diligent. Scientists who don’t want to do good science never will, and they can’t be forced to. Honest scientists were doing great science long before the blockchain or even academia existed.
No Quick Fix
A tongue-in-cheek way to summarize the above is: Science is an art, not a science. And the same is true for politics. In other words, there is no way to mechanize them without destroying them.
How to fix this problem is beyond the scope of this essay, but suffice it to say that academia as the church of our new “Science” religion cannot continue. More importantly, “Science” as a religion cannot continue.
“The Science” will never be able to tell us anything – only scientists will. Moreover, religious “Science” might be able to ignore the is/ought distinction and give instructions on what to do, but (small “s”) science cannot.
We must recognize that there is no way to make policy questions scientific because “When scientists are allowed to dictate policy, politics does not become scientific, but science becomes politicized.”
More specifically, even if we knew The Truth, many conflicting ways of moving forward would exist. No amount of “Science” can get rid of the difficult ethical decision that is inherent in choosing a course of action. To ignore this necessary debate on values is to allow a certain cohort—predominantly scientists and journalists—to impose their beliefs and values as the irrefutable standard. In other words, they get to shape and preach the religion of “Science,” destroying both religion and science in the process. And I, for one, think that both worked much better when they were separated.
Thanks to Allen Farrington for corrections and comments.