A study by researchers at the Stanford Internet Observatory has found that LAION-5B, one of the largest image datasets used to train AI systems like Stable Diffusion, contains thousands of instances of child sexual abuse material (CSAM).
The suspected CSAM images were identified through a combination of perceptual and cryptographic hash detection. The non-profit LAION (Large-scale Artificial Intelligence Open Network), which compi...
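The article cuts off before describing the detection pipeline, but the two techniques it names work roughly as follows: a cryptographic hash flags byte-identical copies of known material, while a perceptual hash flags visually similar copies that survive resizing or re-encoding. Below is a minimal Python sketch of that general idea, not the Stanford team's actual tooling. The average hash here stands in for industrial perceptual-hashing systems such as Microsoft's PhotoDNA, and `KNOWN_MD5S` / `KNOWN_PHASHES` are hypothetical placeholders for the hash lists maintained by organisations such as NCMEC.

```python
import hashlib
from PIL import Image

def cryptographic_hash(path: str) -> str:
    """MD5 of the raw file bytes: matches only exact duplicates."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def average_hash(path: str, hash_size: int = 8) -> int:
    """Simple perceptual (average) hash: tolerant of resizing and re-encoding."""
    # Shrink to a tiny greyscale thumbnail, then record which pixels
    # are brighter than the mean as a 64-bit fingerprint.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

# Hypothetical placeholders for externally maintained known-abuse hash lists.
KNOWN_MD5S = {"d41d8cd98f00b204e9800998ecf8427e"}
KNOWN_PHASHES = [0x0F0F0F0F0F0F0F0F]

def is_suspect(path: str, max_distance: int = 5) -> bool:
    """Flag a file if it matches a known hash exactly or near-exactly."""
    if cryptographic_hash(path) in KNOWN_MD5S:
        return True
    ph = average_hash(path)
    return any(hamming_distance(ph, k) <= max_distance for k in KNOWN_PHASHES)
```

The two methods complement each other: cryptographic hashes give exact matches with essentially no false positives but miss any modified copy, while perceptual hashes tolerate transformations at the cost of a tunable distance threshold.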