DHS Tests AI Tool to Combat AI-Generated Child Abuse Images
The Department of Homeland Security's Cyber Crimes Center is trialing a new AI tool to differentiate between AI-generated and genuine child sexual abuse material (CSAM). The three-month experiment aims to assist investigators in prioritizing cases involving real victims.
Hive AI, a company specializing in content moderation, has been awarded a $150,000 contract to supply the detection tool. The company's software can identify AI-generated content, including CSAM images, and it has previously provided deepfake detection technology to the US military.
The rise of generative AI has driven an increase in AI-generated CSAM, posing challenges for investigators. By flagging AI-generated images, the tool aims to help them focus on cases involving real victims and streamline the investigative process.
If the trial proves successful, the detector could significantly enhance the Cyber Crimes Center's ability to pursue genuine CSAM cases, saving valuable time and resources.