OpenAI says new models have increased risk of being misused to create bioweapons

OpenAI’s latest artificial intelligence models have an increased risk of being misused to create biological weapons, as safety researchers raise the alarm about the potential dangers of the rapidly advancing technology.

The San Francisco-based company announced its new models, known as o1, on Thursday, touting their new capabilities to reason, solve hard maths problems and answer scientific research questions. These abilities are seen as a crucial breakthrough in the effort to create artificial general intelligence — machines with human-level cognition.

OpenAI’s system card, a tool to explain how the AI operates, said the new models have a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons.

The “medium” designation, the highest OpenAI has ever given for its models, means the technology has “meaningfully improved assistance that increases ability for existing experts in CBRN-related advanced fields to be able to create a known CBRN threat (eg, tacit knowledge, specific supplier information, plans for distribution)”, according to its policies.

AI software with more advanced capabilities, such as the ability to perform step-by-step reasoning, poses an increased risk of misuse in the hands of bad actors, according to experts.

These warnings come as tech companies including Google, Meta and Anthropic are racing to build and improve sophisticated AI systems, as they seek to create software that can act as “agents” that assist humans in completing tasks and navigating their lives. 

These AI agents are also seen as potential moneymakers for companies that so far are battling with the huge costs required to train and run new models.

That has led to efforts to better regulate AI companies. In California, a hotly debated bill — called SB 1047 — would require makers of the most costly models to take steps to minimise the risk their models were used to develop bioweapons.

Some venture capitalists and tech groups, including OpenAI, have warned that the proposed law could have a chilling effect on the AI industry. California governor Gavin Newsom must decide over the coming days whether to sign or veto the law.

“If OpenAI indeed crossed a ‘medium risk’ level for CBRN weapons as they report, this only reinforces the importance and urgency to adopt legislation like SB 1047 in order to protect the public,” said Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world’s leading AI scientists.

He said that as “frontier” AI models advance towards AGI, the “risks will continue to increase if the proper guardrails are missing. The improvement of AI’s ability to reason and to use this skill to deceive is particularly dangerous.”

Mira Murati, OpenAI’s chief technology officer, told the Financial Times that the company was being particularly “cautious” with how it was bringing o1 to the public, due to its advanced capabilities, although the product will be widely accessible to ChatGPT’s paid subscribers and to programmers via an API.

She added that the model had been tested by so-called red-teamers — experts in various scientific domains who have tried to break the model — to push its limits. Murati said the current models performed far better on overall safety metrics than their predecessors.

Additional reporting by George Hammond in San Francisco