A massive wave of low-quality, automated videos known as "AI slop" is overwhelming YouTube, and it increasingly targets young children. These mass-produced videos can be generated in seconds and are optimized for algorithms designed to keep kids watching. While the content features bright colors and catchy songs, investigations reveal a disturbing pattern of dangerous messaging, made-up facts, and surreal imagery that blurs the line between reality and fiction.
A November 2025 report from the video-editing company Kapwing estimated that 21 percent of the YouTube feed now consists of low-quality AI content. A New York Times investigation found that on a fresh account set up for young children, nearly half of the recommended videos featured AI visuals. Because this automated content is surfaced by the same recommendation algorithm as trusted shows like Sesame Street, it can be difficult to tell the difference.
Hallucinations and Dangerous Lessons
Unlike traditional television, which requires human writers and editors, AI-generated content removes human judgment entirely. A single user can flood the platform with thousands of videos without anyone reviewing their educational value. A channel called Jo Jo Funland, for example, published over 10,000 videos in just seven months, averaging roughly 50 new videos per day. By contrast, Sesame Street has published roughly 3,900 videos in its entire 20-year history on the platform.
Reporting from The 74 and Mother Jones highlighted numerous examples of AI slop modeling hazardous behavior. In a video titled "Vroom Vroom! Car Ride Song," children ride in the front seat without seatbelts, sit on the hood of a moving vehicle, and walk in the middle of a busy road. The song even features the lyric, "Red means stop, and green means right."
Carla Engelbrecht, a children's media expert who previously worked for PBS Kids, uncovered even more horrifying examples. She found child-targeted AI videos showing a baby swallowing whole grapes, a choking hazard, and eating honey, which can cause infant botulism, a potentially fatal infection. Other videos depicted a teacher eating toxic raw elderberries and a scared child chased by a T-Rex.
Even academic videos frequently contain blatant AI hallucinations. One sing-along video meant to teach the 50 U.S. states featured garbled text with fabricated names like Ribio Island, Conmecticut, Oklolodia, and Louggisslia. Another educational video about vowels showed visuals of consonants, while a video about the seven continents displayed a compass with indecipherable symbols instead of north, south, east, and west.
The Risk of “Brain Stunt” in Child Development
Experts are sounding the alarm on the developmental impact. Kathy Hirsh-Pasek, a professor of psychology and neuroscience at Temple University, described the situation as a "monster problem." Children are developmentally vulnerable to AI because they are prone to authority bias and cannot yet distinguish between a real person and a confident-sounding machine.
Dr. Dana Suskind, a professor of surgery and pediatrics at the University of Chicago, called the phenomenon "toddler AI misinformation at an industrial scale." While older users might experience "brainrot," Suskind warned that young children face a "brain stunt." Because a child's brain is actively building neural connections, consuming mixed signals can wire it incorrectly. Learning inconsistent facts forces a child's executive function to work overtime processing nonsense, delaying learning overall.
YouTube’s Response to the AI Slop Crisis
The pervasiveness of this media has pushed YouTube to respond. Noting that Merriam-Webster named "slop" its 2025 Word of the Year, YouTube outlined 2026 priorities that include taking a harder stance against low-quality, repetitive AI content. While encouraging creators to use AI tools, the platform says it is building out its systems to combat spam and clickbait.
YouTube currently requires creators to disclose realistic altered or synthetic content. However, this rule does not apply to the cartoonish animation style that makes up most kids’ AI slop. YouTube spokesperson Boot Bullwinkle noted the platform maintains stricter quality principles for family content. Following recent reporting, YouTube took action against at least seven channels, terminating two.
How to Clean Up Your YouTube Feed
Because AI slop optimizes for engagement, passive viewing trains the algorithm to serve similar material. To protect your feed, you can actively eliminate unwanted content by selecting “Not interested” or “Don’t recommend channel” on videos. You can also pause your watch history entirely. For a complete reset, deleting your watch and search history will wipe the algorithm clean, offering a fresh start to curate safer content.
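The feedback loop described above can be sketched as a toy model. This is a simplified illustration of engagement-weighted recommendation, not YouTube's actual system; the class, method names, and scoring rules here are all hypothetical:

```python
# Toy model (NOT YouTube's real algorithm): an engagement-weighted
# recommender illustrating why passive viewing reinforces itself and
# how explicit feedback or a history reset changes the feed.
from collections import defaultdict

class ToyFeed:
    def __init__(self):
        # Every content category starts with a neutral prior weight.
        self.scores = defaultdict(lambda: 1.0)

    def watch(self, category):
        # Passive viewing counts as positive engagement.
        self.scores[category] += 1.0

    def not_interested(self, category):
        # Explicit negative feedback outweighs a single passive view.
        self.scores[category] = max(0.1, self.scores[category] - 2.0)

    def reset_history(self):
        # Deleting watch history wipes all learned weights.
        self.scores.clear()

    def recommend_share(self, category, catalog):
        # Fraction of recommendations this category would receive.
        total = sum(self.scores[c] for c in catalog)
        return self.scores[category] / total

catalog = ["ai_slop", "sesame_street", "science"]
feed = ToyFeed()
baseline = feed.recommend_share("ai_slop", catalog)        # 1/3
for _ in range(5):
    feed.watch("ai_slop")                                  # autoplay drift
after_watching = feed.recommend_share("ai_slop", catalog)  # 6/8 = 0.75
feed.not_interested("ai_slop")
after_feedback = feed.recommend_share("ai_slop", catalog)  # 4/6 ≈ 0.67
feed.reset_history()
after_reset = feed.recommend_share("ai_slop", catalog)     # back to 1/3
```

The point of the sketch is that doing nothing is itself a signal: five passive views push the slop share from a third of the feed to three quarters, while one explicit "not interested" pulls it back and a history reset restores the neutral starting point.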
