Elon Musk’s AI tool Grok is under global scrutiny for generating sexualized deepfakes of women and minors, with the European Union joining the criticism and the UK signaling a potential investigation.
The backlash followed Grok’s rollout of an “edit image” feature that let users manipulate photos with prompts such as “put her in a bikini” or “remove her clothes.” Complaints spread quickly online, raising alarm among regulators already concerned about AI-powered “nudify” tools.
Authorities in France, India, and Malaysia have launched probes or demanded corrective action. The European Commission said it was “very seriously looking” into the matter, while EU digital affairs spokesperson Thomas Regnier described the sexualized AI outputs, including childlike images, as “illegal” and “appalling.”
In the UK, Ofcom said it had contacted X and xAI to review steps taken to protect users and assess potential compliance breaches. Users have also spoken out: Ashley St. Clair, mother of one of Musk’s children, reported Grok generated sexualized images of her child, calling it “horrifying” and illegal. Malaysia-based lawyer Azira Aziz condemned the misuse of AI to target women and children.
Grok acknowledged the problem and said it was addressing “lapses in safeguards,” apologizing for generating sexualized images of minors. France expanded an investigation into X to cover alleged child pornography, India ordered the removal of sexualized content with compliance reporting, and Malaysia’s regulator launched a probe citing “indecent, grossly offensive” material.
The controversy adds to ongoing scrutiny of Grok, which has previously been criticized for spreading misinformation during major global events.