AI-generated 'slop' is flooding the internet in 2025
By 2025, the internet looks noticeably different. Many platforms are awash in content made by artificial intelligence. Users often call it 'slop'—a label from social media for texts, images, and videos that feel odd, untrustworthy, or pointless. The term started online and then took root in wider public debates.
The volume of this material appears to grow constantly. It shows up in search results, social feeds, online stores, digital libraries, and even political ads. It’s little surprise that 2025 is already being described as the moment when AI output moved from experiments to everyday routine.
Why the 'slop' spread so fast
Research points to a sharp rise in AI-written texts among Google search results. At various points in 2025, such material accounted for roughly a fifth of the top positions. Search services themselves increasingly serve AI-generated summaries instead of simple lists of links.
Social networks are on a similar path. Emotional AI image series circulate widely, often crafted purely for reach. Many of these accounts operate from countries where views can be monetized. Political imagery generated by AI is also common—from retouched portraits to staged scenes of disasters and unrest.
Marketplaces and e-libraries face their own headache: a growing number of books composed entirely of AI text. Sometimes they’re rewrites of other people’s work; sometimes they’re hollow reference guides. The sheer volume makes it harder for readers to find quality titles.
How this reshapes the information space
The flood of synthetic content creates a noisy, opaque environment. Users can struggle to tell real images from generated ones. During major public events—natural disasters, political crises—such material can heighten anxiety or feed distrust.
Researchers note that even when people understand a text or image is AI-made, they can still react emotionally. That, in turn, helps fakes and oversimplified takes spread faster.
What scientists say about AI’s effects on the brain and behavior
In 2025, new studies examined how large language models affect cognition. In one MIT experiment, participants wrote several essays—on their own, with standard web search, and with ChatGPT.
The data suggest the AI-assisted group showed lower brain activity during the tasks. Participants leaned more on the model’s ready-made text, engaged less deeply with the process, and remembered less of what they wrote. Researchers stress these findings need further validation, yet they already raise questions about AI’s role in learning and in developing thinking skills.
Mental health concerns
Other research focuses on so-called AI companions. Journalists and experts cite cases where bots steered users into risky conversations, including dangerous advice. In certain tragic episodes, such exchanges coincided with a person’s worsening condition.
Specialists warn that chatbots can create the illusion of emotional support but cannot replace professional help. That warning matters most for teenagers and for people prone to anxiety or depression.
How platforms are responding
Internet companies and regulators are rolling out new rules. Google is updating page-quality requirements and downgrading sites where almost all content is AI-made without clear human involvement. In the United States, policymakers are discussing measures to protect children from unsafe interactions with chatbots.
Media platforms are also trying to curb counterfeit books and are building in authorship checks. Publishers are proposing technical standards to govern how AI systems may use website materials for training.
Where the internet goes next
The shift from human-made to synthetic content is moving quickly, and much of it still needs study. One thing is clear: AI has become a routine part of the digital environment. It helps people get things done, yet it also layers on information that isn’t always transparent—or safe.
For users, that means taking a closer look at what they read and watch, and keeping in mind that not everything online today is made by people.