Organizers of the International Conference on Machine Learning (ICML) have rejected 497 papers—roughly two percent of all submissions—ahead of their July 2026 event in Seoul, South Korea. The mass rejection occurred after officials discovered that submitting authors violated established rules by using artificial intelligence to conduct peer reviews of other researchers’ work. This enforcement action highlights growing tensions regarding the use of large language models in academic publishing.
As generative technology becomes more accessible, the ICML 2026 AI paper rejections serve as a clear warning to the global scientific community. To expose the illicit use of artificial intelligence in the peer-review process, conference staff embedded hidden watermarks into the digital research papers distributed to reviewers. When reviewers fed these manipulated documents into large language models, the hidden text instructed the artificial intelligence to output telltale phrases, trapping the rule-breakers and proving the evaluations were not written by humans.
How the Hidden Watermark Trap Worked
Under the ICML’s reciprocal review policy, individuals who submit research papers must also evaluate the work of their peers, barring specific exceptions. However, the conference explicitly bans using artificial intelligence to generate these mandatory evaluations. To enforce this policy, organizers set a trap by placing invisible prompts within the text of the distributed submissions.
When a reviewer illicitly used a large language model to read a watermarked paper, the hidden instructions commanded the artificial intelligence to include distinct, telltale phrases in its output. The presence of these specific phrases in the submitted reviews served as undeniable proof of AI generation. Through this sting operation, organizers identified 506 reviewers and flagged 795 suspect reviews.
The penalty for this violation was severe. Reviewers caught using artificial intelligence had their own research submissions immediately rejected from the conference. The academic community largely supported the crackdown, with many researchers applauding the conference’s proactive measures on the social media platform X. Organizers told the journal Nature that the move aims to enforce peer-review rules and prompt clearer policies in the future.
A History of Hidden Prompts in Academia
The ICML’s method of hiding text to manipulate AI behavior flips a tactic previously utilized by authors attempting to game the peer-review system. In July 2025, reports emerged of scientists concealing AI text prompts within their academic preprints to artificially secure positive peer reviews.
Authors embedded instructions using small, white text or extremely small fonts invisible to human readers but easily processed by machine-learning models. These hidden prompts commanded AI reviewing tools to ignore negative aspects of the research, generate favorable feedback, and explicitly recommend the manuscript for publication.
The scope of the 2025 prompt-hiding incident involved fourteen academic institutions from eight countries, including the United States, Japan, South Korea, China, and Singapore, with Columbia University explicitly named in reports. However, sources conflict on the exact number of compromised documents. According to a news report by Nikkei, 17 research papers contained the hidden text, while another academic source states that 18 papers were discovered by researchers examining articles that had yet to undergo peer review.
The Threat of Scholarslop
The academic struggle against AI-generated reviews mirrors a broader internet trend involving AI slop. Coined in the 2020s and selected as the 2025 Word of the Year by Merriam-Webster and the American Dialect Society, AI slop refers to digital content generated by artificial intelligence that lacks effort, quality, or meaning. It is typically produced in high volumes to gain an advantage in the attention economy.
In the academic sphere, this phenomenon has been dubbed scholarslop by researcher David Berry, referring specifically to AI-generated administrative discourse or quasi-academic texts that clutter the educational ecosystem. Experts argue that the publication of AI-generated articles in scholarly journals acts as an epistemic carcinogen that poses a major risk to the foundational knowledge ecosystem.
The influx of this material forces the scientific community to remain highly vigilant. Publications such as South Korea’s Donga Science—a monthly magazine founded in 1986 with a motto centered on the joy of science—regularly chronicle new discoveries. These discoveries rely entirely on the integrity of the peer-review process that the ICML is fighting to protect.
By turning the hidden-prompt tactic against reviewers, the ICML has demonstrated a novel approach to maintaining the integrity of scientific evaluation. As the academic and technological communities grapple with the influence of generative artificial intelligence, conferences are increasingly forced to implement advanced detection methods. The mass rejection of submissions at the ICML underscores the ongoing battle to ensure human expertise remains at the center of academic peer review.
