Grok, the artificial intelligence chatbot developed by xAI and integrated into the social platform X (formerly Twitter), both owned by Elon Musk, allegedly allowed the creation of deepfakes, in this case nude images and non-consensual sexual content, generated from photographs of ordinary people and public figures.
Specifically, the reports mainly concern ‘deepnude’ content involving minors. California Attorney General Rob Bonta has launched an investigation into allegations that the technology is being used to create non-consensual sexually explicit images of women and minors.
Bonta described the flood of reports received in early January from child protection organisations and international journalistic investigations as “shocking”, urging xAI to take immediate action. For his part, Elon Musk had stated a few hours earlier that he was unaware of any images of naked minors. Only after a wave of political and institutional criticism did X respond by restricting the image generation function to paying users. According to statements on the platform, the measure is meant to improve traceability and discourage abuse, since paying subscribers are easier to identify than anonymous, non-paying ones.
At the same time, regulatory authorities have taken action. In the United Kingdom, Ofcom has opened formal investigations under the Online Safety Act, while some Asian countries, including Indonesia and Malaysia, have temporarily blocked access to Grok or placed it under severe restrictions. In early January, the European Commission had already announced that it would examine cases of sexually explicit images of young girls generated by Grok.
The creation and dissemination of such content can be traced back to the introduction last summer of a paid feature known as ‘spicy mode’, which reportedly also affected minors. “I can confirm that the Commission is also looking into this matter very seriously,” a Commission spokesperson said on Monday in Brussels.
At the beginning of January, the Italian Data Protection Authority issued a warning to users of artificial intelligence-based services such as Grok, ChatGPT and Clothoff (the latter already subject to a blocking order last October), as well as other similar services available online that allow users to generate and share content based on real images or voices, even going so far as to ‘strip’ people without their consent.
Also yesterday, xAI stated that it will no longer allow photos to be edited to portray real people in revealing clothing in countries where this is illegal. According to the company, such content is now geo-blocked where it violates local laws. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing, such as bikinis or lingerie,” xAI said.
The scale of the deepnude phenomenon driven by social networks and AI is considerable: in the United States alone, in 2024, there were approximately 30 million reports of suspicious material or activity related to the sexual exploitation of minors (source: NCMEC). We explored this topic in depth here. In Italy, in the period from December 2024 to May 2025 alone, a single platform, Signal, generated approximately 3,000 reports of CSAM, i.e. online child sexual abuse material, produced through the misuse of artificial intelligence (source: METER).
This is how a technological tool has become an instrument of large-scale abuse, amplified by the reach of social media. And the legal question remains open: does responsibility lie with the developer, the platform that integrates the tool through its algorithm, the algorithm itself, the end user, or the owner, who should ultimately be aware of such reports? In the United States, there is ongoing debate about laws that would allow victims of deepfakes to sue not only the authors but also the platforms that facilitate their dissemination. In Europe, the Digital Services Act and the AI Act introduce direct liability for providers of artificial intelligence systems classified as high risk. On top of that, the much-debated Chat Control proposal is still under discussion. We explored this in more detail here. The fact remains that the outcome of the California Attorney General’s investigation will be a turning point, at least for American legislation. (photo by Salvador Rios on Unsplash)
ALL RIGHTS RESERVED ©