The Guardian reported on December 27, 2025, that a Kapwing study found more than 20% of videos recommended to a brand‑new YouTube account are “AI slop” – low‑quality AI‑generated clips designed to farm views. Kapwing analyzed 15,000 top YouTube channels and estimated that these AI‑only channels have accumulated a combined 63 billion views, 221 million subscribers, and roughly $117 million in annual revenue.
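Those totals imply a few sanity-check ratios worth spelling out. A quick back-of-the-envelope pass in Python; the derived figures below are ours, not Kapwing's, and assume the views, subscribers, and revenue all describe the same pool of channels:

```python
# Back-of-the-envelope check on the totals reported by Kapwing.
# Assumption (ours, not Kapwing's): views, subscribers, and revenue
# all describe the same pool of AI-only channels.

TOTAL_VIEWS = 63_000_000_000    # 63 billion views (reported)
TOTAL_SUBS = 221_000_000        # 221 million subscribers (reported)
ANNUAL_REVENUE = 117_000_000    # ~$117 million per year (reported)

# Implied revenue per 1,000 views across the whole pool. Caveat:
# revenue is annual while the view count may be cumulative, so this
# mixes time bases and is best read as a rough upper bound.
rpm = ANNUAL_REVENUE / TOTAL_VIEWS * 1_000
print(f"Implied revenue per 1,000 views: ${rpm:.2f}")  # ~$1.86

# Implied views accumulated per subscriber.
print(f"Views per subscriber: {TOTAL_VIEWS / TOTAL_SUBS:.0f}")  # ~285
```

If those figures do share a time base, the implied revenue per view is small, which fits the article's thesis: this business model only works because the content costs almost nothing to produce.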
This study is a vivid snapshot of how generative AI is already reshaping the information environment long before we reach anything like AGI. If one in five videos shown to a new YouTube user is cheap, synthetic “slop,” then the marginal cost of content has effectively collapsed — and recommendation algorithms are responding exactly as designed, optimizing for engagement over quality. That dynamic creates a powerful economic incentive to keep scaling automated content engines, even if they add virtually no informational value ([theguardian.com](https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds)).
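A deliberately crude simulation makes the incentive concrete. This toy model is entirely our construction, not how YouTube's ranker actually works: synthetic clips engage slightly worse on average, but near-zero marginal cost lets producers flood the candidate pool, and a feed sorted purely on engagement rewards that volume.

```python
import random

random.seed(0)

def make_items(n, kind, mean_engagement):
    # Draw a predicted-engagement score per item; floor at zero.
    return [{"kind": kind,
             "engagement": max(0.0, random.gauss(mean_engagement, 0.15))}
            for _ in range(n)]

# 100 human-made items vs. 900 slop items: slop engages worse on
# average, but costs ~nothing to make, so it arrives in 9x volume.
candidates = (make_items(100, "human", mean_engagement=0.55)
              + make_items(900, "slop", mean_engagement=0.45))

# A ranker that optimizes engagement only, with no quality or
# provenance term in the score.
feed = sorted(candidates, key=lambda item: item["engagement"],
              reverse=True)[:20]

slop_share = sum(item["kind"] == "slop" for item in feed) / len(feed)
print(f"Slop share of the top 20 recommendations: {slop_share:.0%}")
```

In runs like this, the high-volume pool typically takes a clear majority of the top slots, even though any individual slop item is a worse bet than any individual human-made one.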
For the race to AGI, the implications are less about raw model capability and more about downstream impact. As low‑quality AI output saturates platforms, pressure will grow for better provenance, ranking, and authenticity signals. That, in turn, will nudge platforms and regulators toward more structured interfaces with AI systems: standardized metadata, audit trails, perhaps even content caps for fully automated channels. Paradoxically, the flood of slop could accelerate investment in “higher‑tier” reasoning models that promise trustworthy summarization and filtering — because overwhelmed users will need something to stand between them and the noise.
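To make “structured interfaces” slightly more concrete, here is a minimal sketch of how a provenance signal could feed into ranking. Everything in it is hypothetical: the record fields and penalty values are ours, loosely inspired by content-provenance efforts such as C2PA, and are not drawn from any platform's actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical provenance record (our invention; field names are
# illustrative and do not come from C2PA or any real platform spec).

@dataclass
class ProvenanceRecord:
    channel_id: str
    ai_generated: bool      # self-declared or detected
    signed_manifest: bool   # cryptographically signed edit history
    human_review: bool      # a person vouched for the upload

def rank_penalty(p: Optional[ProvenanceRecord]) -> float:
    """Multiplier applied to an engagement score: 1.0 = no penalty."""
    if p is None:
        return 0.5          # no provenance at all: heavy downrank
    if p.ai_generated and not (p.signed_manifest or p.human_review):
        return 0.3          # fully automated and unattested
    if p.ai_generated:
        return 0.8          # AI-assisted but attested
    return 1.0

# Example: an engagement score of 0.72 for an unattested AI upload.
score = 0.72 * rank_penalty(
    ProvenanceRecord("UC_example", ai_generated=True,
                     signed_manifest=False, human_review=False))
print(f"Adjusted score: {score:.3f}")   # 0.72 * 0.3 = 0.216
```

The point of making it a multiplier rather than a hard gate is that provenance does not have to censor anything to matter; even a soft downrank changes the economics of flooding.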
The bigger strategic takeaway is that whoever controls recommendation and distribution surfaces has enormous leverage over how generative AI is actually experienced. That’s a different kind of moat from one built on model weights, and it will matter as agentic systems start navigating these feeds on our behalf.


