
Disturbing revelation: AI image-generators trained on explicit photos of children

18:33 20.12.2023

A new report has revealed that popular artificial intelligence (AI) image-generators were trained on datasets containing images of child sexual abuse. The report, released by the Stanford Internet Observatory, emphasizes the need for companies to address this harmful flaw in the technology they have built. These images have facilitated the production of realistic, explicit imagery of fake children, as well as the transformation of social media photos of fully clothed teenagers into explicit images. This development has raised alarm among schools and law enforcement agencies worldwide.

Previously, researchers believed that unchecked AI tools produced abusive imagery of children by combining two separate buckets of online images - adult pornography and benign photos of children. However, the Stanford Internet Observatory found over 3,200 images of suspected child sexual abuse in the LAION database, which is used to train leading AI image-makers. LAION, which stands for Large-scale Artificial Intelligence Open Network, has temporarily removed its datasets in response to the report.

Although these illegal images make up only a fraction of LAION's vast index of approximately 5.8 billion images, the report suggests that they are likely influencing AI tools' ability to generate harmful outputs. Furthermore, the repeated circulation of images of real victims compounds the harm of their prior abuse. Stanford Internet Observatory's chief technologist, David Thiel, who authored the report, explains that this problem is challenging to fix because the field is highly competitive, and generative AI projects have been rushed to market without sufficient attention to safety measures.

One of the main users of LAION, Stability AI, a London-based startup that develops the Stable Diffusion text-to-image models, played a significant role in shaping the dataset's development. Although newer versions of Stable Diffusion have made it more difficult to create harmful content, an older version introduced last year, which Stability AI says it did not itself release, remains embedded in other applications and tools and is reportedly the most popular model for generating explicit imagery.

Stability AI insists that it only hosts filtered versions of Stable Diffusion and has taken proactive steps to mitigate the risk of misuse. The Canadian Centre for Child Protection, which operates Canada's hotline for reporting online sexual exploitation, acknowledges that the older version of Stable Diffusion remains in the hands of many users, making it difficult to retract its influence.

LAION was created by Christoph Schuhmann, a German researcher and teacher, who aimed to democratize AI development by making a vast visual database publicly accessible. However, the source of much of LAION's data, Common Crawl, believes that LAION should have scanned and filtered the data it acquired before utilizing it. LAION claims to have developed rigorous filters to detect and remove illegal content, but the Stanford report suggests that consulting with child safety experts earlier in the process could have improved these filters.

Many text-to-image generators are derived from the LAION database, although it is not always clear which ones. OpenAI, the creator of DALL-E and ChatGPT, states that it does not use LAION and has fine-tuned its models to reject requests for sexual content involving minors. Google based its text-to-image Imagen model on a LAION dataset but chose not to make it public in 2022 after an audit revealed inappropriate content, including pornographic imagery, racist slurs, and harmful social stereotypes.

Given the difficulties in retroactively cleaning up the data, the Stanford Internet Observatory urges more drastic measures. It calls on anyone who has built training sets from LAION-5B to delete them or work with intermediaries to clean the material. It also urges efforts to make the older version of Stable Diffusion disappear from all but the darkest corners of the internet.

