AI Image Training Database Found to Contain Child Abuse Images
Stanford study reveals over 1,000 images of child sexual abuse material in LAION's open-source database, raising concerns about the misuse of AI.
- Stanford University’s Cyber Policy Center has found over 1,000 illegal images depicting child sexual abuse in an open-source database used to train popular image-generation tools.
- The database, created by nonprofit LAION, contains more than 5 billion links to online images, which companies use as training data for their AI models.
- Experts blame a race to innovate and a lack of accountability in the AI space for the presence of such illegal content in training data.
- LAION has temporarily taken down its datasets and says it will ensure they are safe before republishing them.
- The findings raise concerns that AI tools trained on such data can generate explicit images of children, putting real children at risk.