Regulators in Ireland have launched a significant investigation into X, the social media platform formerly known as Twitter, regarding its AI chatbot, Grok. The inquiry, announced by the Data Protection Commission (DPC), focuses on concerns that the AI tool is being used to generate fake, sexually explicit images of real people without their consent. This probe represents a major step by European authorities to address the growing issue of non-consensual deepfake content on large social platforms.
The Data Protection Commission, which serves as the lead privacy regulator for many major tech companies in the European Union, confirmed it has started a “cross-border” inquiry. The investigation will examine whether X has implemented sufficient safeguards to prevent Grok from creating harmful content, specifically so-called “nudification” images, in which AI is used to digitally remove clothing from photos of individuals. Under the General Data Protection Regulation (GDPR), companies are required to conduct risk assessments and put mitigation measures in place when processing personal data that could pose high risks to users’ rights and freedoms.
Focus on Data Protection and Risk Mitigation
The core of the DPC’s investigation is whether X is complying with its obligations under the GDPR. The regulator is looking into whether the company has adequately assessed the data protection risks associated with Grok’s image generation capabilities. Specifically, the commission wants to know if X has taken the necessary steps to stop the tool from generating deepfakes that use the likenesses of real individuals.
According to the DPC, the inquiry was triggered by concerns that the processing of personal data through Grok’s image generation feature could result in high risks to the rights and freedoms of individuals. The regulator noted that it is particularly concerned about the creation of non-consensual sexual content. If X is found to be in violation of GDPR rules, the company could face substantial fines, which can reach up to 4% of its global annual turnover or €20 million, whichever is higher.
This is not the first time X has faced scrutiny over its AI practices. In previous interactions with the DPC, the company had agreed to suspend certain data processing activities related to training its AI models on public posts from EU users. However, this new investigation specifically targets the output of the Grok tool and the potential harm caused by its generative capabilities.
Global Concerns Over AI Safety
The investigation by the Irish watchdog highlights a broader global concern over the safety and regulation of generative AI tools. As these technologies become more powerful, regulators are increasingly focused on the potential for misuse, particularly privacy violations and the creation of harmful content. The DPC’s move is seen as a test case for how existing data protection laws can be applied to new AI technologies that process personal data in novel ways.
Graham Doyle, a deputy commissioner at the DPC, stated that the commission is looking into whether X has complied with the principle of “data protection by design and by default.” This principle requires companies to integrate data protection safeguards into their products and services from the very beginning of the development process, rather than as an afterthought.
The outcome of this inquiry could have far-reaching implications for how AI companies operate within the European Union. A finding against X could force the platform—and potentially other AI developers—to implement stricter controls on their image generation tools to ensure they cannot be used to create deepfakes of real people. The investigation is expected to involve cooperation with other data protection authorities across the EU, as the issue affects users throughout the bloc.
