Insurance for Hallucinations: The Need for Publishers to Reassess Fact-Checking Practices
In the rapidly evolving world of journalism, the integration of Artificial Intelligence (AI) has become increasingly prevalent. However, a recent incident involving the Chicago Sun-Times has highlighted the risks and consequences of relying on AI without proper oversight.
On May 20, the Sun-Times published a summer reading list featuring fifteen books, only five of which actually existed. The source of the error was traced back to a freelance writer who used a large language model (LLM) without conducting thorough fact-checking. The incident drew sharp criticism, damaged the newspaper's credibility, and underscored the growing risks publishers face when AI-generated content isn't rigorously verified.
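Even a lightweight automated screen can catch this class of error. As an illustrative sketch (not a description of any newsroom's actual pipeline), the snippet below checks each title-author pair against the public Open Library search API and flags anything the catalog cannot confirm for human review; the reading-list entries and the flagging logic are hypothetical.

```python
import json
import urllib.parse
import urllib.request

# Illustrative sketch: confirm that book titles exist in a public catalog
# before publishing. A real fact-check would consult multiple sources and
# end with a human editor, not an API call.
OPEN_LIBRARY_SEARCH = "https://openlibrary.org/search.json"

def book_exists(title: str, author: str) -> bool:
    """Return True if Open Library finds any record matching title + author."""
    query = urllib.parse.urlencode({"title": title, "author": author, "limit": 1})
    with urllib.request.urlopen(f"{OPEN_LIBRARY_SEARCH}?{query}", timeout=10) as resp:
        data = json.load(resp)
    return data.get("numFound", 0) > 0

# Hypothetical reading-list entries: one real book, one invented title
# (reportedly among the fabricated entries in the Sun-Times list).
reading_list = [
    ("The Overstory", "Richard Powers"),
    ("Tidewater Dreams", "Isabel Allende"),
]

for title, author in reading_list:
    status = "found" if book_exists(title, author) else "UNVERIFIED - flag for editor"
    print(f"{title!r} by {author}: {status}")
```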
To prevent such errors and maintain credibility, best practices for publishers revolve around governance, transparency, human oversight, rights management, and strategic controls.
Firstly, implement a clear AI content policy and workflow. Establish which AI tools are authorized, define how AI-assisted content should be reviewed, and set the verification steps that must be completed before publishing. This includes adding review checkpoints to flag high-risk content early; even a quick internal screen to catch originality issues, brand confusion, or inappropriate references can prevent long-term problems.
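To make the idea concrete, here is a minimal sketch of a hypothetical pre-publication gate in which AI-assisted drafts must clear every policy checkpoint before release. The checkpoint names and rules are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass, field

# Hypothetical policy checkpoints an AI-assisted draft must clear.
REQUIRED_CHECKPOINTS = [
    "ai_tool_authorized",    # the tool used is on the approved list
    "sources_verified",      # factual claims traced to primary sources
    "originality_screened",  # plagiarism / brand-confusion screen passed
    "editor_signoff",        # a named human editor approved the piece
]

@dataclass
class Draft:
    headline: str
    ai_assisted: bool
    completed: set[str] = field(default_factory=set)

def ready_to_publish(draft: Draft) -> tuple[bool, list[str]]:
    """AI-assisted drafts must clear every checkpoint; others need sign-off only."""
    required = REQUIRED_CHECKPOINTS if draft.ai_assisted else ["editor_signoff"]
    missing = [c for c in required if c not in draft.completed]
    return (not missing, missing)

draft = Draft("Summer reading list", ai_assisted=True,
              completed={"ai_tool_authorized", "editor_signoff"})
ok, missing = ready_to_publish(draft)
print("publish" if ok else f"blocked; missing: {missing}")
```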
Secondly, ensure human-in-the-loop oversight. Assign qualified editors or reviewers to oversee AI content creation at every stage. Human oversight is crucial to detect biased, inaccurate, or misleading AI outputs. Leading publications such as The New York Times, Bay City News, and the BBC apply human fact-checking and editorial review to AI-generated outputs.
Thirdly, secure intellectual property rights and permissions. Avoid inputting copyrighted material into AI tools to prevent unauthorized use and licensing issues. Use AI tools that address permission, transparency, and fair reward for content usage; this reduces legal risk and reinforces responsible content creation.
Fourthly, leverage technology to control AI content use. Employ technology solutions that give publishers control over how AI bots access and use their content. This includes blocking unauthorized AI scraping or requiring licensing agreements and payments for AI use, turning a potential liability into an asset.
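The most common first layer of such control is a robots.txt policy that names known AI crawlers. The sketch below uses Python's standard urllib.robotparser to show how such a policy reads and which user agents it would block; the rules are illustrative, and since crawler compliance with robots.txt is voluntary, publishers typically layer server-side blocking or licensing gateways on top.

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt policy (illustrative): disallow well-known AI crawlers
# while leaving ordinary search indexing alone. Compliance is voluntary on
# the crawler's side; server-side enforcement is a separate layer.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot, CCBot, and Google-Extended are blocked; Googlebot falls through
# to the wildcard rule and stays allowed.
for bot in ("GPTBot", "CCBot", "Google-Extended", "Googlebot"):
    allowed = parser.can_fetch(bot, "https://example.com/articles/summer-reading")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```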
Lastly, use disclosure strategically. Disclose AI involvement where appropriate to maintain trust and avoid reputational damage. Transparency about AI-generated content can build credibility in some industries and markets, though it depends on legal exposure and audience sensitivity to AI content.
Together, these practices form a comprehensive approach for publishers to maintain content quality, legal compliance, audience trust, and business value while responsibly integrating AI-generated content.
Jonathan Gillham, the Founder and CEO of Originality.ai, a software company specializing in AI content and plagiarism detection, emphasizes the importance of these practices. As AI continues to evolve, it is crucial for publishers to adapt and implement robust strategies to ensure the accuracy and credibility of their content.
Recent incidents at other media outlets, such as Sports Illustrated, CNET, and Gizmodo, have shown that this is not an isolated issue. A multi-layered approach to AI accountability is needed, including establishing clear AI usage policies, requiring contributors to disclose when AI was used, and citing original sources for factual claims. By doing so, publishers can mitigate the risks associated with AI-generated content and uphold the integrity of their journalism.