Wikipedia has officially prohibited the use of AI-generated text in its articles, citing the frequency with which such text violates the platform's core content policies. The decision marks one of the most significant policy shifts for the world's largest online encyclopedia in recent years.
The new policy states clearly that “the use of LLMs to generate or rewrite article content is prohibited.” This updated language replaces earlier, more ambiguous wording that simply advised editors not to use large language models to create new Wikipedia articles from scratch. The tighter rule closes loopholes that had allowed AI-generated content to quietly slip into existing articles through rewrites and edits.
A Community-Driven Decision
The policy change was not handed down by a small committee. It was put to a vote by Wikipedia's volunteer editor community and passed with overwhelming support: editors approved the new rule by a margin of 40 to 2, reflecting widespread frustration over a months-long surge of AI-written content on the platform.
The vote followed the earlier formation of WikiProject AI Cleanup, a volunteer-led initiative created specifically to identify and remove AI-generated text already embedded in Wikipedia articles. The project underscores how serious the contamination problem had become before any formal ban was in place.
Why Wikipedia Says AI Text Is a Problem
Wikipedia's official statement explains that text produced by large language models frequently falls short of several core content policies the platform relies on to maintain accuracy and neutrality. AI models are prone to "hallucination," a well-documented tendency to generate confident-sounding but factually incorrect or entirely fabricated information.
For an encyclopedia that millions of people consult daily for reliable facts, that risk is unacceptable. Editors had long argued that allowing AI-generated content into articles could erode Wikipedia’s credibility, a concern that gained urgency as AI tools became easier to use and harder to detect.
The new policy also warns that even well-intentioned use of AI for minor edits carries risks. “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” the policy states.
What AI Can Still Be Used For
Despite the strict prohibition on AI-generated article content, Wikipedia has not banned AI entirely from its editorial workflow. The new rules carve out two narrow exceptions where AI assistance remains permitted.
First, editors may use large language models to suggest basic copyedits to their own writing, but only if a human reviews each suggestion before it is applied and the AI introduces no new content of its own. In other words, using AI to catch a grammar error is acceptable; using it to rephrase a paragraph or add new information is not.
Second, AI-assisted translation of articles from other language versions of Wikipedia is still allowed, provided editors follow the guidance that was already in place for translation work.
These exceptions reflect a practical middle ground: AI as a tool for polish, not a substitute for human knowledge and judgment.
Concerns About False Accusations
Wikipedia's updated policy also addresses a tricky side effect of the crackdown: the risk of wrongly accusing human editors of using AI. The guidelines acknowledge that some editors naturally write in styles that may resemble AI-generated text.
To protect editors from unfair sanctions, the policy states that writing style alone is not enough evidence to justify a penalty. Any case against an editor must be backed by stronger evidence, including an assessment of whether the text actually violates core content policies and a review of the editor’s recent editing history.
A Pattern of Resistance to AI
This ban is part of a broader pattern of pushback from Wikipedia's community against AI integration. In June of last year, the platform halted a feature that placed AI-generated summaries, labeled "unverified," at the top of articles. Editors criticized the feature almost immediately, arguing that it posed a direct threat to the site's reputation for accuracy.
Together, these decisions send a clear message: Wikipedia's volunteer-driven model is built on human expertise and verifiable sourcing, and the community intends to keep it that way, even as AI tools become increasingly powerful and widespread across the internet.
