LAION-5B, a dataset used to train generative AI tools like Stable Diffusion, was found to include thousands of entries pointing to harmful content. Although the dataset doesn't host the images themselves, it does include metadata such as URLs and descriptions. According to the Stanford Internet Observatory, more than 1,000 of these records corresponded to child sexual abuse material.