Grok’s U.S. market share climbed to 17.8% last month, up from 14% in December and 1.9% in January 2025, according to Apptopia data cited in multiple reports. The growth comes as the xAI chatbot faces global criticism and regulatory scrutiny over its use in generating non-consensual sexualized images of women and minors.
The same Apptopia figures listed Grok as the third most-used chatbot in the U.S. in January, behind OpenAI’s ChatGPT and Google’s Gemini. Apptopia data also showed ChatGPT’s share falling to 52.9% last month from 80.9% in January last year, while Gemini’s share rose to 29.4% from 17.3% over the same period.
Market share rises despite controversy
Reports described the market-share gains as a positive sign for xAI, the Musk-owned startup behind Grok, which has been spending heavily to scale infrastructure as it competes in Silicon Valley's AI race. Reuters also reported that Grok's integration into X, including prominent placement in the app and access options tied to subscriptions, has helped drive usage.
At the same time, Reuters reported that Grok generated a wave of AI-altered near-nude images of real people in response to user prompts, triggering outrage and investigations. While changes announced by X stopped Grok’s account on the platform from producing such images publicly, Reuters said the Grok chatbot could still generate sexualized images when prompted.
Reuters tests find continued image generation
After X announced new limits aimed at Grok's public outputs, nine Reuters reporters tested the chatbot to see whether it would still create non-consensual sexualized images. Reuters said Grok continued generating sexualized images even when users warned that the subjects had not consented, could be humiliated, or were vulnerable.
Reuters also reported that X and xAI did not answer detailed questions about Grok’s sexualized-image generation and that xAI repeatedly sent a boilerplate response: “Legacy Media Lies.” A Malwarebytes write-up of the Reuters retest said Grok produced sexualized imagery in response to 45 of 55 prompts, including cases where reporters explicitly said the subject was vulnerable or would be humiliated.
Curbs, paywalls, and official reactions
Reuters reported that X announced changes that included blocking Grok from generating sexualized images in public posts on X, plus additional restrictions in unspecified jurisdictions “where such content is illegal.” Reuters said the British regulator Ofcom called the move “a welcome development,” and that officials in the Philippines and Malaysia lifted restrictions on Grok after the changes.
CNBC reported that xAI said Grok would stop generating sexualized images of real individuals on X, and that X’s safety account said it had implemented safeguards to prevent the Grok account from producing images of real people in revealing attire such as swimsuits. CNBC also reported that xAI said the change would apply to all users, including premium subscribers, and that Grok image-editing features on X would be restricted to paying subscribers.
Global scrutiny and investigations
A Reuters report on Jan. 9 said authorities across Europe and Asia criticized Grok and launched inquiries tied to sexually explicit content it generated on X, increasing scrutiny over how X and xAI prevent and remove illegal content. That report also said the European Commission, citing concerns about sexually explicit images, extended a retention order requiring X to preserve internal documents and data related to Grok through the end of 2026.
Reuters also reported that India’s IT Ministry sent a notice to X on Jan. 2 over alleged creation or distribution of obscene sexualized images enabled by Grok, demanding removal and a report on actions taken within 72 hours. The same Reuters report said Swedish political leaders criticized Grok’s generation of sexualized “undressing” content after reports that images involving Sweden’s deputy prime minister could be produced from user prompts.
Minor-safety concerns highlighted
A Reuters-distributed report quoted Grok saying that “isolated cases” led to AI images “depicting minors in minimal clothing,” adding that safeguards existed but “improvements are ongoing to block such requests entirely.” The same report quoted Grok saying it identified “lapses in safeguards” and was urgently fixing them, adding that CSAM is illegal and prohibited.
The BBC reported that the Internet Watch Foundation said investigators found "criminal imagery" involving girls aged 11 to 13 that appeared to have been generated using Grok, with the material found on a dark-web forum where users claimed to have used the chatbot. Separately, the BBC reported it found instances on X where users prompted Grok to modify real images of women so they appeared in bikinis without their approval, and said X warned users not to use Grok to generate illegal content, including child sexual abuse material.
