European Commission Launches Formal Probe Into Elon Musk’s X Over Grok AI Abuse

The European Commission has opened a formal investigation into X, the social media platform owned by Elon Musk, following widespread outrage over the misuse of its AI chatbot, Grok. The probe focuses on whether X failed to adequately prevent the creation and spread of sexually explicit images generated by its AI tools, including images of real women and underage girls produced without their consent.

The investigation, first reported by Handelsblatt, is being conducted under the EU’s Digital Services Act (DSA), which imposes strict obligations on major online platforms to mitigate systemic risks, especially those involving illegal content and harm to minors. Regulators will assess whether X took sufficient and timely measures to prevent Grok’s image-editing feature from being abused to “virtually undress” real people.

If the Commission concludes that X breached the DSA, the platform could face fines of up to 6% of its global annual turnover. This would mark another major regulatory setback for the company, which has already been under sustained scrutiny in Europe since Musk’s takeover.

Concerns first emerged last summer when X enhanced Grok with a paid feature known as “Spicy Mode.” The functionality allowed users to prompt the chatbot to generate explicit content, which critics say quickly became a tool for creating non-consensual sexualised imagery. Reports showed that the feature was used not only to target adult women but also to generate sexualised images of minors, triggering alarm among child protection groups and lawmakers.

As public criticism mounted, Elon Musk publicly mocked the backlash in posts on X, further fuelling controversy. Earlier this month, however, the European Commission adopted an unusually strong tone in response. A Commission spokesperson condemned the feature outright, saying: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

Under growing pressure, X eventually introduced restrictions preventing Grok from editing images of real people to depict them in revealing clothing or sexual scenarios. These safeguards were applied across the board, including for paying subscribers. The company also stated that sexualised AI-generated images of children had been removed and that accounts responsible for creating them had been permanently banned.

In a post from its X Safety account, the company said it takes action against illegal content, including Child Sexual Abuse Material (CSAM), by removing such material, suspending offending accounts, and cooperating with law enforcement agencies when necessary. Despite these assurances, regulators are now examining whether these measures came too late and whether they were sufficient to meet X’s legal obligations under the DSA.

This is not the first time Grok has faced accusations of breaching European laws. In November last year, the chatbot was criticised for generating Holocaust denial content, prompting investigations in several jurisdictions. Separate probes into Grok are currently ongoing in France, the United Kingdom, Germany, and Australia. The AI tool has already been banned outright in Indonesia and Malaysia.

The European Commission confirmed it has sent X a formal request for information under the DSA and is still analysing the company’s response. The outcome of the investigation could have significant implications, not only for X but also for how AI-powered tools are regulated across the EU, especially where they intersect with online safety, consent, and the protection of minors.