The Internet Condom, a Hopeful AI Future
The prevailing sentiment I’ve observed online is that AI-generated slop is flooding all previously reliable channels, rendering them nearly useless. This is most evident on social media, but lately it has also shown up in things like bots scraping GitHub activity and then spamming the associated email addresses. There are certainly good arguments that this trend will continue and that most open communication channels will disappear in favor of closed or private ones. In a way, the same thing happened when public spaces in the real world became unusable due to unrest and crime: people moved into private, gated communities.
I’d like to propose a different, more hopeful vision of how this might unfold: AI as an internet condom. In this vision, AI helps protect you from all the slop out there, whether it’s generated by AI or by humans, and there’s plenty of the latter, too. AI will, in a sense, sit between you and the internet. You’ll stop browsing the public, open internet because it has simply become unpleasant, or perhaps because it’s no longer even meant to be browsed by humans. Instead, your AI will filter everything for you. Imagine your Twitter/X timeline, but without politics or ads, for example. Whatever you want, the AI will search and filter the open internet to deliver it to you. There will still be closed communities that you can browse manually, but that will be the exception.
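To make the idea concrete, here is a minimal sketch of such a timeline filter. The LLM classifier is mocked with a keyword heuristic (`classify_post` is my own hypothetical stand-in, not a real API); in practice you would send each post to a model of your choice with a prompt asking whether it matches any of your blocked topics.

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def classify_post(post: Post, blocked_topics: list[str]) -> bool:
    """Stand-in for an LLM classifier: True if the post should be hidden.

    A real version would call a model; this keyword check is only a mock.
    """
    text = post.text.lower()
    return any(topic in text for topic in blocked_topics)


def filter_timeline(posts: list[Post], blocked_topics: list[str]) -> list[Post]:
    """Return only the posts the user actually wants to see."""
    return [p for p in posts if not classify_post(p, blocked_topics)]


timeline = [
    Post("alice", "New paper on sparse attention is out!"),
    Post("bot42", "SHOCKING election news, click here"),
    Post("bob", "Ad: buy my crypto course today"),
]
clean = filter_timeline(timeline, blocked_topics=["election", "ad:", "crypto"])
print([p.author for p in clean])  # → ['alice']
```

The interesting design question is where `classify_post` runs: on your own hardware, or on a cloud model you merely rent.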
So the question is: Who owns this AI? Does it belong to you, or to a company that won’t let you turn off ads? When I look at how these models are developing, I’m hopeful that users will be able to control their own AI. That doesn’t even necessarily mean you have to run it on your own hardware—though that would obviously be the best-case scenario—but simply using open-source models in the cloud could give you enough control over it.
This will not be limited to social media. It can also serve as a bullshit defense for other failing channels and institutions, such as peer review and academia. Regular readers of this blog know that I believe academia is in the midst of an epistemic crisis from which there is no easy way out. The incentives are simply not designed to produce high-quality research. Peer review as a system has failed and is not scalable in the face of the flood of AI-generated papers. AI review, on the other hand, is scalable. As I mentioned before, I’m confident that AI review can improve our epistemic situation at both the individual and perhaps even the institutional level. With the right prompts, even o3 could confidently identify obvious bullshit studies posted by people like Bryan Johnson or Dr. Rhonda Patrick. I consider this a very hopeful development, since without AI we are at the mercy of Brandolini's law.
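What "the right prompts" might look like is an open question; the sketch below is one illustrative way to structure a skeptical-reviewer prompt. The rubric questions and wording are my own assumptions, not a tested protocol, and the resulting string would be sent to whichever model you trust (o3, an open-source model, and so on).

```python
# Illustrative rubric for an AI reviewer; the questions are assumptions.
REVIEW_RUBRIC = [
    "Is the effect size plausible given the sample size?",
    "Are there obvious confounders the authors ignore?",
    "Do the cited sources actually support the claims?",
    "Are the statistics sound (p-hacking, multiple comparisons)?",
]


def build_review_prompt(abstract: str) -> str:
    """Assemble a skeptical-reviewer prompt for a paper abstract."""
    checklist = "\n".join(f"- {q}" for q in REVIEW_RUBRIC)
    return (
        "You are a skeptical peer reviewer. Evaluate the study below.\n"
        f"Checklist:\n{checklist}\n\n"
        f"Abstract:\n{abstract}\n\n"
        "Answer each checklist item, then give an overall verdict."
    )


prompt = build_review_prompt("We show that supplement X extends lifespan by 40%...")
print(prompt)
```

Because the prompt is just a string, this scales trivially: run it over every new preprint in a field and triage before any human spends time on the paper.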
Using AI as a first line of defense against bullshit could be a way, at least for some fields, to improve the quality of articles published in their journals. Unfortunately, this presupposes that journals and reviewers are actually interested in improving that quality, rather than, say, networking with the right people, engaging in academic politics, and gaining status, so I wouldn’t hold my breath. It's probably better to just let your AI filter the journal before reading it.
AI as a bullshit filter and as an aggregator is the use case that interests me the most right now. As I previously announced, I’ve developed a service that generates a news report (and now also an investor brief) every morning, taking into account all available sources, seeking out different perspectives, conducting research, and so on. It’s called Vaultara (general news report) and Vaultara Market (investor brief). Check it out if you’re interested—it’s free!