Artificial intelligence is no longer just a futuristic concept; it is actively reshaping how people consume information, often with alarming consequences. From social media feeds in New Zealand to elementary classrooms in South Korea, AI-generated content is blurring the line between reality and fabrication. Recent investigations and reports highlight a growing global crisis where synthetic media is eroding trust in journalism, distorting historical facts for students, and endangering children through digital exploitation.
The Rise of Automated “Slop” News
In New Zealand, the emergence of AI-generated “news” pages on social media has misled thousands of users. One investigation identified Facebook pages, such as the now-defunct “NZ News Hub,” that mimicked the branding of legitimate news organizations. These pages used artificial intelligence to scrape and rewrite genuine news stories, pairing them with unlabelled, synthetic images.
The content on these pages often dramatized real events to generate engagement. For instance, a tragic landslide in Mount Maunganui was depicted with fabricated images of crushed houses and cars that did not exist. In a particularly disturbing example, a photo of a minor killed in the disaster was manipulated to show her dancing. Another post altered an image of parents grieving their teenage daughter to make them appear affectionate. These posts, driven by automated scripts, sometimes even contained the raw AI prompts, such as instructions to “make this shorter, more dramatic, or social-media style.”
Experts warn that this flood of low-quality, automated content, often referred to as “slop,” is damaging public trust. With only 32% of New Zealanders reporting trust in the news, the growing difficulty of distinguishing verified reporting from AI fabrication is deepening that distrust. Academic researchers note that while the underlying stories may be based on facts, the AI rewriting process introduces errors, and the accompanying visuals are frequently pure fantasy.
Education and the Erosion of Critical Thinking
The impact of AI misinformation extends beyond social media and into the classroom. In South Korea, educators are reporting a significant decline in literacy and critical thinking skills as students increasingly rely on generative AI for assignments. A survey of over 900 teachers found that 82.5% believe excessive AI use is lowering students’ literacy, primarily because it encourages them to bypass deeper reasoning in favor of immediate answers.
Specific incidents illustrate the severity of this issue. In one Seoul elementary school, a fifth-grade student presented a history report claiming that independence fighter Ahn Jung-geun paid condolences to Ito Hirobumi. The student had accepted a “hallucination” from a generative AI, which had conflated the independence fighter with his son. In another case, a student argued that plastic bags were eco-friendly based on an AI response that focused on manufacturing energy while ignoring the fact that plastic does not decompose.
The problem affects higher education as well. University students have been caught citing non-existent laws, such as a fictional “Special Act on the Prevention and Support of Lonely Deaths,” simply because an AI chatbot invented the title. Data from the OECD supports these concerns: while students using AI may score higher on tests, their performance drops significantly (by 17% in one experiment) when the technology is removed, indicating that they have not internalized the material.
The Deepfake Threat to Children
While educational and informational risks are growing, the threat to personal safety is becoming acute. UNICEF has raised an alarm over the explosion of AI-generated sexualized images of children, a phenomenon known as “nudification.” A large study conducted across 11 countries found that over 1.2 million children reported having their images altered into sexually explicit deepfakes in the past year alone. In some nations, that equates to one in every 25 children.
The United Nations agency emphasizes that this misuse of deepfake technology is a form of abuse with real-world harm. The availability of tools that can digitally “undress” individuals has escalated the risk, with social media platforms largely failing to stem the tide. High-profile platforms have faced criticism for allowing AI tools to be used to manipulate images of minors. UNICEF is now calling on governments worldwide to criminalize the creation and possession of AI-generated child sexual abuse material and urging tech companies to implement safety-first detection technologies.
Global Government Responses
In response to these escalating threats, governments are attempting to build defenses. The United Kingdom has launched a new framework in partnership with major technology firms and academic experts to strengthen the global fight against deepfakes. This initiative focuses on establishing consistent standards for detecting and labeling synthetic media.
By testing detection tools against real-world scenarios, the project aims to expose gaps in current defenses, particularly regarding fraud and impersonation. The push for international cooperation is evident, with recent detection challenges involving participants from the Five Eyes intelligence alliance and Interpol. Officials argue that a fragmented approach is insufficient against a technology that is becoming cheaper, faster, and more deceptive by the day.
