OpenAI on Monday launched a brand-new family of models called GPT-4.1. Yes, “4.1”, as if the company’s nomenclature wasn’t confusing enough already.
There’s GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all of which OpenAI says “excel” at coding and instruction following. Available through OpenAI’s API but not ChatGPT, the multimodal models have a 1-million-token context window, meaning they can take in roughly 750,000 words in one go (longer than “War and Peace”).
GPT-4.1 arrives as OpenAI rivals like Google and Anthropic ratchet up efforts to build sophisticated programming models. Google’s recently launched Gemini 2.5 Pro, which also has a 1-million-token context window, ranks highly on popular coding benchmarks. So do Anthropic’s Claude 3.7 Sonnet and Chinese AI startup DeepSeek’s upgraded V3.
It’s the goal of many tech giants, including OpenAI, to train AI coding models capable of performing complex software engineering tasks. OpenAI’s grand ambition is to create an “agentic software engineer,” as CFO Sarah Friar put it during a tech summit in London last month. The company asserts its future models will be able to program entire apps end-to-end, handling aspects such as quality assurance, bug testing, and documentation writing.
GPT-4.1 is a step in this direction.
“We’ve optimized GPT-4.1 for real-world use based on direct feedback to improve in areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more,” an OpenAI spokesperson told TechCrunch via email. “These improvements enable developers to build agents that are considerably better at real-world software engineering tasks.”
OpenAI claims the full GPT-4.1 model outperforms its GPT-4o and GPT-4o mini models on coding benchmarks, including SWE-bench. GPT-4.1 mini and nano are said to be more efficient and faster at the cost of some accuracy, with OpenAI saying GPT-4.1 nano is its fastest and cheapest model ever.
GPT-4.1 costs $2 per million input tokens and $8 per million output tokens. GPT-4.1 mini is $0.40/M input tokens and $1.60/M output tokens, and GPT-4.1 nano is $0.10/M input tokens and $0.40/M output tokens.
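For a sense of what those per-token prices mean in practice, here is a minimal cost-estimation sketch using the launch prices quoted above. The dictionary keys are illustrative model identifiers, not necessarily OpenAI’s official API model strings, and the prices may change after publication.

```python
# Rough per-request cost estimator for the GPT-4.1 family.
# Prices (USD per 1M tokens) are as reported at launch and may change.
PRICES = {
    "gpt-4.1":      {"input": 2.00, "output": 8.00},
    "gpt-4.1-mini": {"input": 0.40, "output": 1.60},
    "gpt-4.1-nano": {"input": 0.10, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request: (tokens / 1M) * price-per-1M."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + \
           (output_tokens / 1_000_000) * p["output"]

# Example: a 10,000-token prompt with a 2,000-token completion.
print(round(request_cost("gpt-4.1", 10_000, 2_000), 4))       # 0.036
print(round(request_cost("gpt-4.1-nano", 10_000, 2_000), 4))  # 0.0018
```

At these rates, the same request is roughly 20x cheaper on nano than on the full model, which is the trade-off OpenAI is pitching against the accuracy gap.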
According to OpenAI’s internal testing, GPT-4.1, which can generate more tokens at once than GPT-4o (32,768 versus 16,384), scored between 52% and 54.6% on SWE-bench Verified, a human-validated subset of SWE-bench. (OpenAI noted in a blog post that some solutions to SWE-bench Verified problems couldn’t run on its infrastructure, hence the range of scores.) These figures are slightly below the scores reported by Google and Anthropic for Gemini 2.5 Pro (63.8%) and Claude 3.7 Sonnet (62.3%), respectively, on the same benchmark.
In a separate evaluation, OpenAI probed GPT-4.1 using Video-MME, which is designed to measure the ability of a model to “understand” content in videos. GPT-4.1 reached a chart-topping 72% accuracy on the “long, no subtitles” video category, claims OpenAI.
While GPT-4.1 scores fairly well on benchmarks and has a more recent “knowledge cutoff,” giving it a better frame of reference for current events (up to June 2024), it’s important to keep in mind that even some of the best models today struggle with tasks that wouldn’t trip up experts. For example, many studies have shown that code-generating models often fail to fix, and even introduce, security vulnerabilities and bugs.
OpenAI acknowledges, too, that GPT-4.1 becomes less reliable (i.e., likelier to make mistakes) the more input tokens it has to deal with. On one of the company’s own tests, OpenAI-MRCR, the model’s accuracy decreased from around 84% with 8,000 tokens to 50% with 1 million tokens. GPT-4.1 also tended to be more “literal” than GPT-4o, says the company, sometimes necessitating more specific, explicit prompts.