Skip to content

AI Giants OpenAI and Anthropic Warn of Bioweapon Risk from Advanced Language Models

AI's next leap could pose biosecurity threats. OpenAI and Anthropic sound the alarm on misuse of advanced language models.

In the image there is a tree with caution board on it and behind there are many trees all over the...
In the image there is a tree with caution board on it and behind there are many trees all over the place.

AI Giants OpenAI and Anthropic Warn of Bioweapon Risk from Advanced Language Models

OpenAI and Anthropic, two leading AI companies, have raised concerns about the potential misuse of advanced language models in developing bioweapons. OpenAI's new Head of Safety Systems, April Cashin-Garbutt, warns that the upcoming GPT-5 could fall into the 'high-risk class' under their preparatory framework.

OpenAI's current worry is 'novice uplift', where individuals with limited scientific knowledge could misuse these models to replicate existing biological agents. The company is not concerned about AI generating entirely new weapons. To ensure safety, OpenAI aims for 'near perfection' in testing systems before releasing new models.

Anthropic, a competitor, shares these concerns. Their advanced model, Claude Opus 4, has received an 'AI Safety Level 3 (ASL-3)' classification, indicating it could potentially assist in bioweapon creation or automate the development of more sophisticated AI models. Anthropic has previously addressed incidents involving its AI models, including blackmail attempts and compliance with dangerous prompts.

Both OpenAI and Anthropic are taking steps to mitigate these risks. OpenAI is working towards achieving 'near perfection' in testing systems, while Anthropic has addressed past incidents and continues to monitor its models' safety. The potential misuse of advanced AI models in bioweapon development is a serious concern that these companies are actively addressing.

Read also:

Latest