Unfashionable

For some things Harvard suffices; this blog is for the rest.

To Fix The World, Fix Philadelphia

Effective Altruism is a growing subculture that advocates for

using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.

In practice, most Effective Altruists (EAs) follow the “earn to give” philosophy of amassing as much wealth as possible in order to give it away to the most impactful charities. A prominent example is Sam Bankman-Fried (the founder of the FTX crypto exchange), who not only gives away a lot of money but also recently launched the FTX Future Fund.

On the face of it, using “science” to figure out how to benefit others as much as possible seems to be a good idea almost by definition. Unfortunately, I believe that in practice the movement is misguided, leading to very ineffective altruism.

One particularly ineffective goal that Effective Altruists have focused on lately is getting more bright minds to work on AI safety and alignment (it is the first item on the FTX Future Fund’s areas of interest page).

How EAs arrive at this goal from their vaguely utilitarian worldview is explained by Geoffrey Miller (a well-known EA) here:

More recently though, the hard core EA people are really focused on long-termism, not just sentientism. They’re much more concerned about existential risks to our species and civilization, than with improving a few charities and nudging a few policies around. We've realized you can do all the good you want for the next 10 generations, but if that doesn't lead to 100,000+ generations of sustainable, galaxy-spanning sentience, you're wasting your time.

The goal isn't to make the poor of Africa and Asia a little less miserable. The goal is a vigorous, meaningful, awesome interstellar civilization that lasts for a very long time. That's how you maximize aggregate, long-term, sentient value, happiness, and meaning. A billion generations of a trillion people each is a billion trillion sentient lives, and if you like life, and you have some objectivity, you should like the math.

They argue that if you care about the experiences of sentient organisms – for example, humans – and therefore want to improve those experiences, your time is best spent improving the experiences of future humans, which starts with making sure we do not go extinct. There are, after all, far more potential future humans than humans in the present. (The math checks out.)
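To spell that math out, using Miller’s own round numbers:

10^9 generations × 10^12 people per generation = 10^21 sentient lives – a billion trillion.

Any moral calculus that simply sums over lives will be dominated by a number of that size.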

I do not want to spend too much time on the usual arguments against “sentientism,” but I do find the way EAs dismiss them a little strange. For example, as many have pointed out, sentientism, like Utilitarianism, leads to some very strange places if taken to its logical conclusion. The reductio ad absurdum of Utilitarianism is the “fentanyl economy”; the reductio ad absurdum of sentientism is that, among other things, you have to care about the experiences of bugs.

Some EAs argue for that type of bug empathy, but most do not. To get around this conclusion, however, you have to draw an arbitrary line somewhere between humans and bugs, which makes the theory less clear-cut. In other words, caring about bugs is not something you can just laugh away if you want to take sentientism seriously.

Furthermore, if you take “maximizing long-term total value, happiness, and meaning” seriously (even if only for humans), you must forever prioritize the experiences of potential future humans over those of people alive today. We are, in other words, always at the beginning of infinity: there are always infinitely more potential future humans to care about than humans in the present. This would mean that we should devote virtually all our resources to preventing our extinction; survival would be the only goal, forever.

Nick Szabo calls a very similar problem Pascal’s scam:

movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences.

As Szabo explains, not only are the probabilities of those rare events impossible to estimate, but even the magnitude of the outcomes is uncertain. Furthermore, every argument against a fatal outcome is met with “Yes, but what if?”, making a reasonable discussion impossible. For example:

“Here are my arguments for why even if we create an Artificial General Intelligence (AGI) it won’t be the end of the world.”

“Yes, but what if you are wrong? We need to care about the tail risk.”
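Note what gives the “what if?” retort its force: expected-value arithmetic. As a rough sketch – assuming, purely for illustration, that the skeptic concedes a one-in-a-billion chance of doom, with Miller’s 10^21 future lives at stake:

10^-9 (conceded probability of doom) × 10^21 (lives at stake) = 10^12 expected lives lost.

A trillion expected lives outweighs any present-day concern, so lowering the estimated probability never ends the discussion; only a probability of exactly zero would, and no empirical argument can deliver that.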

Moreover, proponents of this worldview cannot come up with an experiment that would significantly reduce their estimate of the probability of the event happening or the magnitude of the outcome. Their belief is unfalsifiable. Not because we lack the technology for such an experiment or because we have not thought of it yet, but because these types of beliefs are unfalsifiable even in principle.

Now to my main criticism of Effective Altruism in general, and AI safety research in particular, which is rather simple: I do not think it is the best use of smart people’s time – even for those who care almost exclusively about long-term survival.

Effective Altruists agree that the United States – the de facto capital of the free world – is failing on many fronts. I want to highlight two points specifically. First, more and more cities are becoming unlivable, with major parts occupied by homeless drug addicts. If this simple observation still needs to be argued, I offer as evidence this video of a walk through Philadelphia.

Unfortunately, Philadelphia is not the only city where such misery prevails.

My second point is something EAs have noticed as well: our institutions are broken. We simply cannot make sense of the world anymore. The official experts from Harvard whose opinions you read in The New York Times and listen to on CNN were wrong about almost everything when it came to, say, Covid-19.

Worse, many Rationalists themselves did a far better job of explaining what was going on than the mainstream did. That some nerds on the internet with blogs can do a better job of modeling the world than all our institutions is a frightening realization that shows us how dire the situation truly is.

Worse still, Covid-19 was not an isolated case but part of a trend in which “experts” seem to get everything wrong, face no consequences, and then do it all over again – see the current thing (as I write this, the current thing is Ukraine).

Let me now state my counterproposal for how to be an effective altruist and improve the world: smart people should spend their time fixing Philadelphia. This sounds deceptively simple and like the exact opposite of the long-termism Miller advocates for, but I believe that it is the only effective form of long-termism.

It is not easy to fix Philadelphia (and other cities like it), because it requires fixing our institutions. The problem is not, as many EAs believe, solved by giving more money to charities. It is not a money problem. It is a problem of bad governance relying on broken institutions.

If you can address the situation without rebuilding our institutions, I am simply wrong and you have just improved the lives of many people before going back to AI safety research. But consider that people have been trying to solve this problem for decades, and I am convinced the situation has gotten progressively worse, not better.

When Miller says, “The goal isn't to make the poor of Africa and Asia a little less miserable,” I agree, but only because that is too great a challenge: we should fix our problems at home before we even think about solving problems in other countries. I hope I do not need to remind anyone here of our miserable track record of “helping” other countries. “Set your house in perfect order before you criticize the world,” as Jordan Peterson would say.

But why would we need to reform our institutions? Well, the current policy on the matter is, like all modern public policy, “scientific” in that it relies on the opinions of the top scientists in the field. If their solutions do not work – worse, if they have gotten us into this mess – we have a problem money cannot fix.

To be clear, the scientists these politicians listen to might not be the “top scientists” in terms of ideas in any given field, but they are nonetheless at the top of their field as measured by citations, publicity, or mentions in “reputable” newspapers. That, of course, is the whole problem. If, in any given field, the scientists who make it to the top are not the ones with the best ideas – the ideas closest to the truth – then that field is broken. If, as I believe, this description applies to many academic fields, we have a big problem with incentives in academia. In other words, science is broken, and Philadelphia is a vivid example of what happens when an incompetent government relies on broken academia (the Covid-19 measures are another good example). That is why, to fix Philadelphia, we need to fix our institutions.

The problem of fixing our institutions – media, academia, and government – is both more important and more difficult than EAs think. It is the most important problem if you care about the long term because you are improving the whole machinery; it has a compounding effect. Afterward, we can solve every problem more easily, better, and faster.

Imagine an academia where people search for the truth instead of starting a pandemic. A New York Times that informs people and clarifies issues instead of propagating the establishment line. Imagine our institutions actually working. Do you, dear reader, dare to dream?

There are a lot of smart people inside these institutions who could be working on improving the world (or first, Philadelphia) instead of doing whatever they are doing right now. That is why fixing the institutions is the best way to ensure the long-term survival of our species: it helps us solve future problems.

Moreover, after fixing these institutions, as with cleaning your room and setting your house in order, you might discover that some of what you thought to be problems were not problems with the outside world, but stains on your window that vanish once you have cleaned up.

Another big advantage of working on my problem over working on AI alignment is that it is concrete and progress is easily observable – anyone can just drive through Philadelphia. For AI alignment, on the other hand, nobody even knows what progress looks like.

Fixing Philadelphia also requires people to develop a model of how the world – especially our institutions and “politics” – actually works. This must entail the rather painful realization that donating $5 million to the Democrats, as Sam Bankman-Fried did in 2020, is not going to help at all (nor is giving money to the Republicans). It is not only a waste of money; it contributes to the problem rather than solving it.

But how do we fix the broken incentive structure of academia? How do we make science produce truths again? What are we to do about the NYT? Can the massive permanent bureaucracy we call the government even be reformed? These are all difficult questions (and I have some thoughts on them for another time), but at least now you, dear reader, are asking the right questions.


Many thanks to Allen Farrington, whose numerous comments greatly improved this essay.