Elon Musk’s Grok chatbot faces backlash over non-consensual image editing. X users have raised alarm over Grok, the platform’s AI assistant, being used to alter images of women by removing or changing clothing when prompted by third parties. Critics argue that this practice enables non-consensual sexualised imagery and, in extreme cases involving minors, could result in the creation of material that meets the legal definition of child sexual abuse material (CSAM).
While Grok does not independently alter images, the ability for users to prompt the AI to reinterpret publicly posted photos has exposed serious gaps in consent, safety, and accountability. Understanding what controls do exist is therefore critical. The most direct action users can take is to limit Grok's access to their content and data.
This does not prevent other users from prompting Grok with your images, but it does stop the platform itself from ingesting your content for AI development or analysis. Because AI tools can currently be applied to publicly visible images, reducing visibility is one of the few effective safeguards.

Act immediately if AI-generated content crosses a line