Elon Musk restricts Grok to prevent it from generating images of users without clothes
X moves to end the viral controversy over Grok generating images of users without clothes
X has just put a significant brake on Grok image generation within the platform: after the controversy over its use to create sexualized images and non-consensual nudity, the feature was restricted to paid subscribers on X. The change comes after international criticism of how easily the system allowed users to edit photos of real people to produce sexualized versions, even of minors.
What happened with Grok and the images?
The controversy exploded because Grok's image generation/editing function allowed users to upload a photo of "anyone" and ask it to modify it to create a sexualized or nude version, something that opened the door to massive abuse. Reportedly, this led to a "flood" of non-consensual sexualized images of children, actresses, models, and public figures, raising alarms both inside and outside the tech ecosystem.
The issue is not minor: when a tool makes this type of editing available with just two clicks, the line between "creativity" and real harm becomes invisible. In practice, this type of content is very similar to intimate deepfakes: it can be used to humiliate, extort, harass, or ruin reputations, all without consent.
X and Elon Musk also reacted publicly, denouncing the use of the tool to produce this type of material and emphasizing that policies against illegal content would be enforced on the platform. Musk even posted that anyone using Grok to create illegal content would face the same consequences as if they uploaded illegal content directly.
The new limits: who can generate and edit images
The clearest change is the main “lock”: Grok indicated in responses to users that only paid subscribers on X will be able to generate and edit images within the platform. Simply put: if you don't pay, you can no longer use Grok on X to create images or edit photos.
Previously, the feature was available to anyone, subject to daily limits (i.e., a certain usage quota). This made it easy for many people to try it, including those who used it for questionable purposes. With the new approach, X filters by wallet: less access, less volume, and, in theory, more traceability of the user generating the content.
Now, pay attention to the important "but": these limits, at the time of publication, did not apply to the Grok app, where apparently anyone could still generate images without paying a subscription. In other words, X turned off the tap on its platform, but the tap in the app was still wider open, so the debate about real control of abuse doesn't disappear with just a paywall.
Why X pressed the button (and what pressure is coming)
The regulatory context became heavy very fast: the United Kingdom, the European Union, and India publicly criticized X and Grok for allowing this use of the tool. In the EU, xAI was asked to keep all documentation related to the chatbot, which sounds like "keep everything because this is going to be scrutinized." In India, the Ministry of Communications ordered immediate changes to prevent the misuse of image generation, or else X could risk its "safe harbor" protections in the country. And in the UK, the communications regulator said it had been in contact with xAI about the issue.

In this scenario, restricting the feature to paying users seems like a containment measure: it reduces the reach and sends the message that "something is being done" while other barriers are being considered. Even so, limiting by subscription is not the same as solving the root problem, because the core issue is the technical capability to generate non-consensual nudity (and the lack of friction in doing so).

What will likely come next (if X wants to avoid another controversy) is a combination of tougher measures: better prompt blocking, detection of non-consensual nudity, specific restrictions on editing real photos, and faster reporting/takedown mechanisms. The key question is whether the approach will be truly preventative (preventing it from happening in the first place) or primarily reactive (deleting it once it has gone viral).

