Every year, the American association NCMEC (National Center for Missing & Exploited Children) publishes a report on data from its CyberTipline platform, to which digital companies (such as Meta, Google and TikTok) are legally required to report suspected child sexual abuse material (CSAM) and related activity.
Recent NCMEC data on child abuse and child pornography show a peak in the United States in 2023 of 36.2 million reports of online sexual exploitation, a record that declined to 20.5 million in 2024, a figure NCMEC recalculates as 29.2 million because of a change in the way the data are aggregated. In 2024 NCMEC changed its methodology: previously it counted only reports (i.e. individual submissions from platforms), which often contained many duplicates (e.g. the same content shared on multiple platforms or reported multiple times); now it distinguishes between 'reports' and 'incidents' (the unique case or piece of content, once duplicates have been accounted for). As a result, the raw number of reports drops (from 36.2 to 20.5 million), while unique incidents remain very high (29.2 million): a sign not that abuse has decreased, but that the counting method is more accurate.
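To give a concrete idea of the reports-versus-incidents distinction, here is a minimal, purely illustrative sketch in Python (the grouping key, a shared content fingerprint, is our simplifying assumption and not NCMEC's actual pipeline):

```python
# Illustrative only: several raw "reports" of the same content
# collapse into a single "incident" once they are grouped.
from collections import defaultdict

# Each report is (platform, content_fingerprint): the same content shared on
# several platforms, or reported several times, produces multiple reports.
reports = [
    ("PlatformA", "fingerprint_001"),
    ("PlatformB", "fingerprint_001"),  # same content, different platform
    ("PlatformA", "fingerprint_001"),  # same content reported twice
    ("PlatformC", "fingerprint_002"),
]

incidents = defaultdict(list)
for platform, fingerprint in reports:
    incidents[fingerprint].append(platform)

print("reports:", len(reports))      # 4 raw submissions
print("incidents:", len(incidents))  # 2 unique cases after grouping
```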
Again in 2024, in the United Kingdom, the IWF detected record levels of web pages containing CSAM: 'self-produced' content (often the result of online grooming or coercion) accounted for more than 90% of the content removed.
In Europe, Europol, through the IOCTA, has found that child sexual exploitation and abuse (CSEA) remains one of the most serious online threats: criminals are migrating to encrypted services and closed communities, making it more difficult to identify and remove content. In 2024, 62% of all web pages dedicated to child sexual abuse identified by the IWF originated in EU countries. This represents a 28% increase on the previous year.
In Italy, the Postal Police's 2024 report highlighted numerous operations and hundreds of arrests for child pornography, with over 2,300 websites blocked. Meanwhile, the latest report by the Terre des Hommes Foundation, 'Dossier Indifesa', recorded over 7,200 crimes against minors in 2024, with an increase in those linked to digital technology (child pornography and possession of material). Finally, Ernesto Caffo, president of Telefono Azzurro, reported that 'the numbers on digital abuse and violence against minors are rising sharply': cases involving the use of AI to create abuse imagery online have increased by 380%.
Part of what lies behind these figures is that criminals are moving towards encrypted chats and closed groups (Telegram, WhatsApp, Signal), which hinder automatic detection and reduce traceability (as the Europol report notes). This does not in itself prove an absolute increase in abuse, but it explains the migration of these flows and the greater difficulty of investigation. When Meta/WhatsApp and other platforms strengthen end-to-end encryption, automatic reports, for example to NCMEC, may therefore decrease (there is less that can be detected and reported upstream), even if criminal activity does not necessarily decline. Encrypted platforms are increasingly used to exchange CSAM and to groom children, but encryption makes it harder to measure the actual incidence.
This year, the Italian association Meter presented the first dossier analysing how artificial intelligence is exploited to generate CSAM, alter images and facilitate online grooming and the manipulation of minors. The paper shows that, between December 2024 and May 2025 (six months), approximately three thousand children in Italy were 'stripped' (had their images digitally nudified) on the instant messaging app Signal by paedophiles and child pornographers operating online. Particularly noteworthy is the phenomenon of deepnudes: people, in this case children and adolescents, who, without their knowledge, can be depicted naked, in indecent poses, in compromising situations (e.g. in bed with alleged lovers) or even in pornographic contexts. We wrote about it here.
In the period 2024-2025, authorities and NGOs have reported a sharp increase in AI-generated images of minors, including practices such as the 'nudification' of photos stolen from social media and sextortion, and in 2025 coordinated operations led to arrests in 19 countries. UNICRI's 2024 analysis also classifies AI-CSAM as an emerging threat. 2024 also saw a record number of pages containing CSAM (IWF 2024), with more than 90% of the content self-generated; AI makes it easier for criminals to create derivatives of such images and threaten minors with extortion ('pay up or we'll spread your images').
To prevent these crimes, the countries concerned have adopted a range of regulations. In Europe, a temporary regulation (EU Reg. 2021/1232), extended until April 2026, is currently in force: it gives companies a clear legal basis to voluntarily and proactively scan their platforms for child sexual abuse material without the risk of being held liable for violating privacy rules. At the same time, in 2022 the EU proposed the 'Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse (CSAM)', now known as 'Chat Control'. We wrote about it here. However, the EU could soon face a regulatory vacuum: the temporary regulation will expire in April 2026 and Chat Control is currently at a standstill (we wrote about it here).
In the meantime, several bottom-up solutions have emerged from different associations.
The IWF (Internet Watch Foundation), the largest British association dedicated to removing child sexual abuse material online, recently published (9 October) a new document illustrating how end-to-end encrypted (E2EE) messaging platforms can prevent the spread of known child sexual abuse images and videos without breaking encryption or violating user privacy. We discussed this in this article. In practice, when a user attempts to upload or send a file (image/video), the device generates a ‘hash’ (a kind of digital fingerprint of the file). The hash is compared against a secure list of hashes of known illegal content, managed by a trusted body such as the IWF or NCMEC. If there is no match, the file is encrypted and sent as normal. If there is a match, the upload is blocked immediately, before the content is encrypted or leaves the device. Therefore, the goal is to block the spread of known child pornography (not to search for new content or suspicious behaviour).
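To make the mechanism more tangible, here is a minimal sketch in Python of on-device hash matching before encryption. It is purely illustrative and not the IWF's actual implementation: real deployments typically rely on perceptual hashes (such as PhotoDNA) rather than a plain SHA-256 digest, the hash list is distributed in a protected form rather than in the clear, and the function names below are our own.

```python
# Minimal, illustrative sketch of client-side matching against known hashes
# before a file is encrypted and sent. Not the IWF's actual implementation.
import hashlib

# Stand-in for the list of hashes of known illegal content maintained by a
# trusted body (IWF, NCMEC); the value here is a placeholder.
KNOWN_ILLEGAL_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute the file's 'digital fingerprint' on the device."""
    return hashlib.sha256(file_bytes).hexdigest()

def try_send(file_bytes: bytes, encrypt_and_send) -> bool:
    """Return False and block the upload if the file matches a known hash;
    otherwise encrypt and send it as normal."""
    if fingerprint(file_bytes) in KNOWN_ILLEGAL_HASHES:
        # Match: blocked before encryption, the file never leaves the device.
        return False
    encrypt_and_send(file_bytes)
    return True

# Example: an ordinary file does not match the list and is sent normally.
sent = try_send(b"holiday photo", encrypt_and_send=lambda data: None)
print("sent:", sent)  # sent: True
```

The point of doing the check on the device, before encryption, is that neither the platform nor anyone else ever inspects the content of private messages: only a match against already-known material can stop an upload.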
In Italy, the Meter association, like NCMEC and the IWF, has been collecting private reports for thirty years and passing them on to the Postal Police. During this week's conference at UPS, entitled 'SafeGuarding Children', its director, Carlo Di Noto, stated: 'The first solution for preventing these cases of abuse is to educate parents, trainers, educators and professionals about the various technologies and the changes taking place on the network.'
It should also always be remembered that, under the GDPR, the use of these platforms is prohibited for minors under 16 in some cases and under 13 in others, depending on the platform. And if, as Di Noto reiterated, 'the grooming phase can last several months', it is parents above all who must be informed and trained about the other, darker use of technology and about who might be hiding on the other side of the screen. (photo by Onur Binay on Unsplash)