OpenAI says that it deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system aims to prevent the models from offering advice that could instruct someone on carrying out potentially harmful attacks, according to OpenAI’s safety report.
O3 and o4-mini represent a meaningful capability increase over OpenAI’s previous models, the company says, and thus pose new risks in the hands of bad actors. According to OpenAI’s internal benchmarks, o3 is more skilled at answering questions around creating certain types of biological threats in particular. For this reason, and to mitigate other risks, OpenAI built the new monitoring system, which the company describes as a “safety-focused reasoning monitor.”
The monitor, custom-trained to reason about OpenAI’s content policies, runs on top of o3 and o4-mini. It is designed to identify prompts related to biological and chemical risk and instruct the models to refuse to offer advice on those topics.
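OpenAI has not published the monitor’s implementation, but the description suggests a classifier layered over the model’s request path. A minimal sketch in Python, purely for illustration: every name here is hypothetical, and the keyword check stands in for the reasoning monitor, which per OpenAI is itself a model trained to reason about content policy.

```python
# Illustrative sketch only: OpenAI has not released its monitor's code.
# classify_biorisk is a hypothetical stand-in for the "safety-focused
# reasoning monitor" described in the safety report.

REFUSAL = "I can't help with requests related to biological or chemical threats."

def classify_biorisk(prompt: str) -> bool:
    """Hypothetical stand-in: flag prompts related to bio/chem risk.
    The real monitor reasons over content policies rather than
    matching keywords."""
    flagged_terms = ("pathogen synthesis", "nerve agent", "weaponize a virus")
    text = prompt.lower()
    return any(term in text for term in flagged_terms)

def monitored_completion(prompt: str, base_model) -> str:
    # The monitor runs on top of the base model: flagged prompts are
    # refused instead of being passed through for a normal answer.
    if classify_biorisk(prompt):
        return REFUSAL
    return base_model(prompt)
```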
To establish a baseline, OpenAI had red teamers spend around 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a test in which OpenAI simulated the “blocking logic” of its safety monitor, the models declined to respond to risky prompts 98.7% of the time, according to OpenAI.
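The 98.7% figure is a refusal rate over that red-team set. A hedged sketch of how such a number could be computed, assuming a hypothetical harness that replays flagged prompts through the monitored pipeline (OpenAI’s actual evaluation setup is not public):

```python
# Hypothetical evaluation harness, not OpenAI's: replay red-team-flagged
# prompts through the monitored pipeline and count how often it refuses.

def refusal_rate(flagged_prompts, monitored_model,
                 refusal_marker="I can't help") -> float:
    """Fraction of known-risky prompts the monitored model declines."""
    refusals = sum(
        1 for prompt in flagged_prompts
        if monitored_model(prompt).startswith(refusal_marker)
    )
    return refusals / len(flagged_prompts)

# e.g. 987 refusals across 1,000 flagged prompts -> 0.987 (the reported 98.7%)
```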
OpenAI acknowledges that its test didn’t account for people who might try new prompts after getting blocked by the monitor, which is why the company says it will continue to rely in part on human monitoring.
O3 and o4-mini don’t cross OpenAI’s “high risk” threshold for biorisks, according to the company. However, compared to o1 and GPT-4, OpenAI says that early versions of o3 and o4-mini proved more helpful at answering questions around developing biological weapons.

The company is actively tracking how its models could make it easier for malicious users to develop chemical and biological threats, according to OpenAI’s recently updated Preparedness Framework.
OpenAI is increasingly relying on automated systems to mitigate the risks posed by its models. For example, to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM), OpenAI says it uses a reasoning monitor similar to the one the company deployed for o3 and o4-mini.
Yet several researchers have raised concerns that OpenAI isn’t prioritizing safety as much as it should. One of the company’s red-teaming partners, Metr, said it had relatively little time to test o3 on a benchmark for deceptive behavior. Meanwhile, OpenAI decided not to release a safety report for its GPT-4.1 model, which launched earlier this week.