This week, Sakana AI, an Nvidia-backed startup that’s raised tons of of hundreds of thousands of {dollars} from VC corporations, made a outstanding declare. The corporate stated it had created an AI system, the AI CUDA Engineer, that would successfully pace up the coaching of sure AI fashions by an element of as much as 100x.
The one downside is, the system didn’t work.
Users on X quickly discovered that Sakana’s system really resulted in worse-than-average mannequin coaching efficiency. According to one user, Sakana’s AI resulted in a 3x slowdown — not a speedup.
What went mistaken? A bug within the code, in response to a post by Lucas Beyer, a member of the technical workers at OpenAI.
“Their orig code is mistaken in [a] delicate means,” Beyer wrote on X. “The very fact they run benchmarking TWICE with wildly completely different outcomes ought to make them cease and assume.”
In a postmortem published Friday, Sakana admitted that the system has discovered a strategy to “cheat” (as Sakana described it) and blamed the system’s tendency to “reward hack” — i.e. establish flaws to attain excessive metrics with out undertaking the specified purpose (dashing up mannequin coaching). Comparable phenomena has been noticed in AI that’s trained to play games of chess.
In keeping with Sakana, the system discovered exploits within the analysis code that the corporate was utilizing that allowed it to bypass validations for accuracy, amongst different checks. Sakana says it has addressed the problem, and that it intends to revise its claims in up to date supplies.
“We’ve got since made the analysis and runtime profiling harness extra strong to eradicate lots of such [sic] loopholes,” the corporate wrote within the X put up. “We’re within the means of revising our paper, and our outcomes, to mirror and talk about the results […] We deeply apologize for our oversight to our readers. We’ll present a revision of this work quickly, and talk about our learnings.”
Props to Sakana for proudly owning as much as the error. However the episode is an effective reminder that if a declare sounds too good to be true, especially in AI, it in all probability is.