The Slop Scapegoat: AI
Blaming AI for low-quality content misses the real problem—and the real opportunity.
I don’t like the term “AI slop”; it’s used far too casually. If we describe slop as low-quality material created to grab eyeballs, the Internet has had copious amounts of it for a long time: the article-spinning software of the 2000s, content farms churning out SEO-driven articles, the rise of viral clickbait. Quantity over quality, you might say.
The idea that AI is necessary for slop is a lazy one of its own. The more we use the term “AI slop”, the more I see two mistakes happening. First, we forget that non-AI slop hasn’t gone away; in fact, the same incentives that created it are now simply being supercharged by new tools. Second, there’s a lazy classification of everything AI-related as slop. Together, these two mistakes lead to a flawed understanding of the problem, preventing us from developing effective solutions.
There are already protections against slop, though we can all attest that they haven’t fully protected us; otherwise most of us wouldn’t even know what slop was. But we do. Will the addition of AI radically change this? I don’t think the change will be all that dramatic, in contrast to the common belief that an unstoppable tidal wave is coming to bring about the “end of the Internet”.
I believe the panic is overstated for three reasons. First, AI can improve the protections. It can also be used to evade them, and it’s hard to say which will be the more powerful force. Second, those protections have already established a status quo that is more stable than is generally acknowledged.
The third, and most powerful, reason is that slop has intent. You don’t have to protect against slop by focusing directly on the content; you can also focus on the intent behind it. And the motivations that drive that intent aren’t endless either.
Let’s think about the protections we have. It often goes unacknowledged that multiple platforms have thrived on slop: Facebook, TikTok, and so on. What is the majority of the content there, if not slop?
At this point, I have to take a minor tangent. Since we define slop as low-quality, we have to acknowledge that quality is in the eye of the beholder. You can identify factual details and use them in a debate about quality, but only when there’s an agreement on goals. It’s a generally acknowledged goal of most products that they don’t fall apart (though some, like toilet paper, must be designed to fall apart at the right time). But what is entertaining is far less agreed upon. What is informative is somewhat in between.
These platforms use algorithms to prioritize content, using user activity as a key input. However, their goal isn’t to find ‘entertaining’ content; it’s to find ‘engaging’ content—anything that keeps your eyes on the screen longer so they can show you more ads. While the two sometimes overlap, they are not the same thing.
It’s close enough that if you ask a representative of Facebook or TikTok, they’ll probably claim they are trying to prioritize entertaining content. However, their systems receive clearer signals about engagement than about entertainment. The fact that this easier-to-measure proxy sits closer to their business objective is a coincidence. A coincidence they’re happy to embrace, but a coincidence nonetheless.
The point is not to defend social media platforms. I have little interest in them myself, and I think their objectives, and thus their actions, have some problems. The point is that they’ve somewhat tamed slop to their purposes. That gives reasonable cause for hope: if you choose a different purpose, you can tame it there as well.
Recognizing slop can be done by assessing quality directly, but that’s time-consuming compared to some shortcuts. Most slop carries its intent with it, and can be recognized by tell-tale markers of that intent. Is a page littered with 1,000 ads, such that if there were any value in the base material, it would be rather hard to find? It’s probably slop. Is it repeating some lazy conspiracy theory? Slop.
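To make the shortcut concrete, here is a minimal sketch of what marker-based screening might look like. Everything in it is an illustrative assumption: the `looks_like_slop` helper, the thresholds, and the marker list are mine, not a description of any deployed filter.

```python
# Hypothetical marker-based slop screen. The markers and thresholds here
# are illustrative assumptions, not tuned values from any real system.

def looks_like_slop(page: dict) -> bool:
    """Flag a page by cheap intent markers rather than deep quality analysis."""
    # Marker 1: ad density. If ads dominate the words, the intent is
    # eyeballs, not value.
    ad_ratio = page["ad_count"] / max(page["word_count"], 1)
    if ad_ratio > 0.05:  # assumed threshold: more than 1 ad per 20 words
        return True
    # Marker 2: bait phrasing in the headline, a classic clickbait tell.
    bait_phrases = ("you won't believe", "doctors hate", "will shock you")
    if any(phrase in page["title"].lower() for phrase in bait_phrases):
        return True
    return False

# A page drowning 400 words of content in 80 ads gets flagged.
print(looks_like_slop({"ad_count": 80, "word_count": 400, "title": "10 tips"}))  # True
```

The point of the sketch is that none of these checks ever judges quality; they only look for traces the intent leaves behind.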
The interesting thing about the anti-slop opportunities that modern AI tools promise is that they can listen to us. This depends on us retaining control, which we mostly gave up in the last round of algorithm deployment.
When preference algorithms were first deployed, they used likes, dislikes, and correlational data. On Pandora, we indicated music we liked, and Pandora contributed data about musical attributes that helped find patterns in our likes and dislikes. We didn’t have a lot of control, but at least there wasn’t much in the system other than our own goals.
Later, as platforms gathered more user data, they began assuming a correlation between your choices and those of similar users. But things got really messy when they started measuring subconscious actions. Suddenly, metrics like how long you spent scrolling or how your cursor lingered over a post became key inputs, and the algorithm designers began optimizing for them. Initially this helped grow those platforms, but slowly users started to realize that their interactions with these platforms weren’t fully serving their own interests. At some level the platforms were satisfying something; otherwise, why would users keep coming back? But we’re not immune to doing things we regret, and the regrets have started to pile up.
With this history, I can see why someone might worry about another iteration of AI-assisted content selection. That said, there are some differences this time. Large language models are pretrained as general-purpose tools; intents as specific as guiding you to particular content aren’t part of the training. This presents an opportunity to take more control. If you wanted to explain to Facebook what you were interested in, you could choose interests from a list, but how those choices affected outcomes was in the hands of the algorithm designer. In theory, we might have been offered the ability to write our own algorithms, but that was never in Facebook’s business model, and besides, most users would never have mastered it (though we might imagine a world in which enough did to share their algorithms with other users).
With GenAI tools, the genie is out of the bottle. Preferences can be described in natural language. That’s still harder than it sounds, but it’s something average users can learn by trial and error until they get something useful. Facebook may still never offer you this, so the social world may have to wait for a transition to something more open.
While a solution for the walled gardens of social media remains elusive, there is opportunity in other environments: those where users are active participants rather than passive consumers, and decentralized spaces where competing services prioritize their own discoverability. Examples include news, opinion writing, and academic research. In these cases, the data is easy to consume, allowing production of a feed that serves your interests. Imagine telling your news aggregator, ‘Show me articles about urban planning, but filter out any that are just clickbait or designed to provoke outrage.’
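As a sketch of how that instruction could become a working filter, here is a minimal example. It assumes a generic `ask_llm(prompt)` function standing in for whatever model API you use; the article field names and the YES/NO protocol are likewise illustrative assumptions, not any aggregator’s actual interface.

```python
# Hypothetical natural-language feed filter. `ask_llm` stands in for any
# chat-model call (a hosted API, a local model, etc.); it is assumed here.

PREFERENCE = (
    "Show me articles about urban planning, but filter out any that are "
    "just clickbait or designed to provoke outrage."
)

def keep_article(article: dict, ask_llm) -> bool:
    """Ask the model whether one article matches the user's stated preference."""
    prompt = (
        f"User preference: {PREFERENCE}\n\n"
        f"Headline: {article['title']}\n"
        f"Summary: {article['summary']}\n\n"
        "Does this article match the preference? Answer YES or NO."
    )
    return ask_llm(prompt).strip().upper().startswith("YES")

def build_feed(articles, ask_llm):
    """Keep only the items that serve the user's own stated interests."""
    return [a for a in articles if keep_article(a, ask_llm)]
```

The important shift is who writes the preference: the user states it in plain language and the model applies it, instead of a platform inferring it from engagement signals.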
This still leaves the problem of the passive user unsolved, and matching our interests there is more complex. In one sense, algorithmically served entertainment is the user’s interest: not the healthiest choice when overused, but not one we’d typically deny a user. Providing better alternatives, which a better feed would do, is probably the most attractive, though not the easiest, way to draw users away from that pattern.
While the term “AI slop” does have a real meaning, and does describe something that will create annoyances, its casual overuse misses the point of slop in general. Yes, there will be more slop, and yes, we’ll need to be more active in filtering it out.
My intent here is not to apologize for AI-generated slop, or slop in general. Yes, we should give some credit based on intent: if someone tries to create something great and fails, we don’t want to discourage future efforts. But if it’s commercially motivated, or criminally motivated, we shouldn’t think twice about asserting our interests. The larger point is: let’s not panic, and let’s be as specific as we can.
But it’s not a hopeless battle, and lazy use of “AI slop” as a term for the purpose of creating antipathy to AI in general misses a real opportunity: to use these new tools to reduce the impact of slop—AI-generated or not—and at the same time reassert our own intentions, rather than a platform’s.

