OpenAI has told US lawmakers that Chinese AI company DeepSeek is using “distillation” to extract outputs from leading US AI models and use them to train DeepSeek’s next-generation systems, according to a memo reviewed by Bloomberg News and described in a report published by The Straits Times. The memo was sent on Feb 12 to the US House Select Committee on China, and OpenAI said the activity is part of “ongoing efforts to free ride on the capabilities developed by OpenAI and other US frontier labs,” the report said.
OpenAI also said it has detected “new, obfuscated methods” aimed at evading its defenses against misuse of model outputs, according to the same report. OpenAI declined to comment on the memo, and spokespersons for DeepSeek did not immediately respond to a request for comment outside regular business hours in Asia, The Straits Times reported.
Memo to House China committee
The Straits Times report said OpenAI has been warning about the issue as it tries to stop users who violate its terms of service, and it described distillation as a process in which one AI model is trained on the outputs of another to develop similar capabilities. OpenAI said distillation has persisted and grown more sophisticated, and it linked the practice largely to China and occasionally to Russia, citing activity it has observed on its platform.
A Foundation for Defense of Democracies (FDD) policy brief said OpenAI “publicly released” a memo to the China Select Committee on Feb 12 alleging DeepSeek stole OpenAI’s intellectual property to fuel its own models. The FDD brief also said OpenAI claimed DeepSeek continued to “steal from ChatGPT” over the past year while reportedly preparing to launch a new major model in the coming months.
What “distillation” means here
OpenAI’s memo described distillation as copying or learning from another model’s outputs, and The Straits Times reported that OpenAI framed this as an unfair attempt to extract results from US models to train DeepSeek’s next-generation systems. The FDD brief described distillation as training one model on the outputs of another as a way to lower costs, and said the approach can be used to work around the limits imposed by US export controls on advanced AI chips.
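For readers unfamiliar with the technique, the sketch below illustrates the general shape of distillation as the reports describe it: prompts are sent to a stronger “teacher” model, its responses are collected, and the prompt-response pairs become supervised fine-tuning data for a smaller “student” model. The prompts, file name, and helper function here are illustrative placeholders, not details from OpenAI’s memo or DeepSeek’s pipeline.

```python
import json

# Illustrative prompts; a real distillation pipeline would use large,
# diverse prompt sets generated or harvested at scale.
PROMPTS = [
    "Explain how gradient descent works.",
    "Summarize the causes of the French Revolution.",
]

def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a stronger 'teacher' model's API.

    A real pipeline would send the prompt to a hosted model and return its
    completion; this placeholder string only lets the sketch run end to end.
    """
    return f"[teacher model's answer to: {prompt}]"

def build_distillation_set(prompts, path="distill_train.jsonl"):
    """Collect teacher outputs and write them as supervised fine-tuning data."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")
    return path

if __name__ == "__main__":
    # The resulting JSONL would then be used to fine-tune a smaller
    # "student" model so that it imitates the teacher's behavior.
    print("wrote", build_distillation_set(PROMPTS))
```

The point of the sketch is only that distillation works from a model’s outputs rather than its weights or code, which is why the memo focuses on how those outputs were obtained.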
The Straits Times also said OpenAI began privately raising concerns shortly after DeepSeek’s R1 model was released in 2025, when OpenAI opened a probe with partner Microsoft into whether DeepSeek obtained data in an unauthorized manner. The FDD brief similarly said that after DeepSeek’s R1 launch in January 2025, OpenAI and Microsoft claimed the model was partially trained on ChatGPT.
Claims of evasion methods
In its memo, OpenAI said it found methods intended to evade controls designed to stop misuse of model outputs, The Straits Times reported. The company’s internal review suggested that accounts associated with DeepSeek employees tried to bypass guardrails by accessing models through third-party “routers” that could mask their source, according to the report.
OpenAI also said DeepSeek employees developed code to access US AI models and obtain outputs in “programmatic ways,” and it pointed to networks of “unauthorised resellers of OpenAI’s services” designed to evade OpenAI’s controls, The Straits Times reported. The FDD brief echoed claims about third-party routers and said OpenAI also alleged unauthorized resellers were used to build more sophisticated distillation methods.
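The “programmatic” access described above generally means calling a model through an API rather than a chat interface. As a hedged illustration only: many third-party routers expose OpenAI-compatible endpoints, so a script can change which provider actually serves a request simply by swapping the base URL, the kind of indirection OpenAI says can mask the origin of traffic. The endpoint URL, API key, and model name below are placeholders, not details from the memo.

```python
from openai import OpenAI  # pip install openai

# Pointing an OpenAI-compatible client at a third-party "router" endpoint.
# Both values are illustrative placeholders, not details from the memo.
client = OpenAI(
    base_url="https://example-router.invalid/v1",  # hypothetical router URL
    api_key="PLACEHOLDER_KEY",
)

def collect_output(prompt: str) -> str:
    """Send a prompt programmatically and return the model's text output."""
    response = client.chat.completions.create(
        model="example-model",  # hypothetical model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(collect_output("Explain distillation in one sentence."))
```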
Business and safety concerns
The Straits Times report said OpenAI argued that distillation could become a business threat because many Chinese models, including DeepSeek’s, do not carry a monthly subscription cost, while US companies such as OpenAI and Anthropic charge fees for premium services after investing billions in AI infrastructure. OpenAI warned that the imbalance could erode the US advantage over China in AI, according to the report.
OpenAI also said that when capabilities are copied through distillation, safety safeguards can be weakened, enabling broader misuse in higher-risk areas such as biology or chemistry, The Straits Times reported. The FDD brief said OpenAI also claimed there were potential efforts to override ChatGPT’s built-in safety features related to chemical and biological weapons development.
Chips, Nvidia, and ongoing scrutiny
The Straits Times report said concerns in Washington extend beyond distillation to access to advanced AI chips that could accelerate DeepSeek’s progress. It said that at the end of 2025, US President Donald Trump moved to ease chip restrictions to allow Nvidia to sell its H200 processors, which the report described as about 18 months behind Nvidia’s leading Blackwell chips.
The report also said US authorities opened a probe after the R1 release into whether DeepSeek circumvented US export controls by purchasing chips via Singapore. Records obtained by the House China committee show that Nvidia provided technical support to help DeepSeek improve and co-design its R1 model. The report said DeepSeek-V3 required 2.8 million H800 GPU hours for full training, and added that those processors were allowed to be sold to China for a few months in 2023 before a later rule halted sales.
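To put the 2.8 million GPU-hour figure in rough perspective, the back-of-the-envelope conversion below translates it into calendar time; the cluster size is an assumption chosen only for illustration, not a figure from the report.

```python
# Rough conversion of the reported training cost into wall-clock time.
gpu_hours = 2_800_000          # H800 GPU hours reported for DeepSeek-V3's full training
assumed_cluster_size = 2_048   # assumed GPU count, for illustration only

wall_clock_hours = gpu_hours / assumed_cluster_size
days = wall_clock_hours / 24
print(f"~{wall_clock_hours:,.0f} hours, or about {days:.0f} days, on {assumed_cluster_size} GPUs")
```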
