how to be an (actually) Effective Altruist
I'm finding this post a bit late, but I think it misses something crucial about why leaving one's home jurisdiction for the frontier is useful. When you're doing politics within incumbent institutions, you're stuck in the position of "trying to get management on your side." [https://scholars-stage.org/on-cultures-that-build/] You think that a very specific kind of drug legalization will end Philadelphia's problems, but the mayor won't listen to you, so you're dead in the water, printing up pamphlets for your cause, and meanwhile you don't have a better theory of politics than anyone else.
Imagine you're a Puritan in the 1600s. You see the Church of England turning against Puritanism under the direction of the King, and unless you're best friends with the King you get almost no say. So instead, you leave for New England and set up your own society. That society achieves near-universal literacy and founds what become the world's most esteemed universities, and you have a massive influence not just on the new society you've created in America but on the society you left in Britain.
Creating a "model society" accomplishes a few things at once. For one, it lets you express your platform unfettered by the whims of incumbent authority. For another, you learn how to adapt your theoretical beliefs to the realities you encounter, improving the ideology you hold. If you survive those stages and don't break down into cannibalism too often, then you become a success story, living propaganda for the set of changes you want. (Also, if you fail, that's a decent sign that there is something seriously wrong with your ideas that needs fixing.)
In the EA world, I think the clear example here is GiveDirectly. You can listen to Julia Galef's interview with Michael Faye to get a sense of the organization's history, but in short: they give money directly to poor people in poor countries. They've recently begun creating similar programs in America, but there's good reason they tried their ideas elsewhere for years first. Not only does going outside America give you more mileage from your donation budget, you're also essentially providing a welfare state for people who don't have the concept, and so dealing with the problems that generates from day one. Faye talks frankly in the podcast about how they had to majorly adjust their concept to handle those problems. But instead of those naive mistakes happening in, say, Philadelphia, where rival activist groups would point out your every failure and draw media attention to you, and where you'd face competition from the state and an existing politics around poverty, working outside America allows you to iterate without entrenched political opposition and refine your program.
Anyway, I agree with your assessment that it would be good if we got institutions on track, but I think that's far more likely to happen if people successfully solve problems akin to Philadelphia's in less oppositional settings and use their results to inform Philadelphian policymaking, than if more young people throw themselves at the wall of trying to create change as an urban nonprofit.
You know how, when some outgroup describes your group's worldview, they usually don't understand it, and end up describing some bizarre misunderstanding of it? The concerns of liberals as depicted on Fox News are different from the concerns of actual liberals, and vice versa with Daily Kos.
This post has that feel regarding effective altruism. I've been in EA circles for ten years, and the AI risk people don't say AI risk is a Pascal's-Mugging-level low risk, no one talks about solving homelessness by throwing money at it, and no one thinks fixing institutions is unimportant or easy. Paul Graham's essay "How to Disagree" has the solution for this: the best kind of disagreement involves quoting the thing you're disagreeing with. It ensures you're arguing with a real thing, it shows the reader that you're arguing with a real thing, and it makes your arguments falsifiable: http://www.paulgraham.com/disagree.html
> Furthermore, every argument against a [AGI-inflicted] fatal outcome is met with “Yes, but what if?” making a reasonable discussion impossible.
Can you link to examples of this? Maybe internet randos say it, but I doubt there are any examples from people like Nick Bostrom, Stuart Russell, or anyone from MIRI.
> proponents of this worldview cannot come up with an experiment that would significantly reduce their estimate of the probability of the event happening or the magnitude of the outcome. Their belief is unfalsifiable. Not because we lack the technology for such an experiment or because we have not thought of it yet, but because these types of beliefs are unfalsifiable even in principle.
The existence of a non-interventionist God is unfalsifiable in principle, and the notion of epiphenomenal consciousness is unfalsifiable in principle. But the claims that AGI is likely to be built within a century, and that AGI alignment is much more difficult than AGI itself, are very falsifiable in principle: wait a century and find out whether AGI was built, and whether you survived.
It's okay to predict things that current technology can't confirm immediately. The Higgs boson was theorized in 1964, but it would have been incorrect to assign it zero (or Pascal-level low) probability just because confirming it required decades and a $9 billion particle accelerator.
I expect people would reduce their probability of the event happening if open problems in AI corrigibility were solved, or if breakthroughs were made in deep learning model explainability, or if someone could train a deep learning model to reliably filter its input to never describe injuring a human (Redwood Research's project a while back), or even just if experiments showed that human cognition takes more computation than previously estimated, which would at least push timeline estimates out further. (Admittedly, this might not decrease some people's probability estimates very much, because some of those things might be helpful but not close to sufficient.)