Breitbart
A recent investigation by Human Rights Watch (HRW) has uncovered a disturbing trend in AI development, where images of children are being used to train artificial intelligence models without consent, potentially exposing them to significant privacy and safety risks.
Ars Technica reports that Human Rights Watch researcher Hye Jung Han has discovered that popular AI datasets, such as LAION-5B, contain links to hundreds of photos of Australian children. These images, scraped from various online sources, are being used to train AI models without the knowledge or consent of the children or their families. The implications of this discovery are far-reaching and raise serious concerns about the privacy and safety of minors in the digital age.
Han’s investigation examined less than 0.0001 percent of the 5.85 billion images in the LAION-5B dataset, yet still identified 190 photos of children from every one of Australia’s states and territories. Given how small that sample was, the actual number of affected children is likely far higher. The dataset includes images spanning the entirety of childhood, making it possible for AI image generators to create realistic deepfakes of real Australian children.