Debates over AI benchmarking have reached Pokémon


Not even Pokémon is safe from AI benchmarking controversy.

Last week, a post on X went viral claiming that Google's latest Gemini model had surpassed Anthropic's flagship Claude model in the original Pokémon video game trilogy. Reportedly, Gemini had reached Lavender Town in a developer's Twitch stream; Claude was stuck at Mount Moon as of late February.

But what the post failed to mention is that Gemini had an advantage: a minimap.

As users on Reddit pointed out, the developer who maintains the Gemini stream built a custom minimap that helps the model identify "tiles" in the game, such as cuttable trees. This reduces the need for Gemini to analyze screenshots before it makes gameplay decisions.

Now, Pokémon is a semi-serious AI benchmark at best; few would argue it's a very informative test of a model's capabilities. But it is an instructive example of how different implementations of a benchmark can influence the results.

For example, Anthropic reported two scores for its latest model, Claude 3.7 Sonnet, on the benchmark SWE-bench Verified, which is designed to evaluate a model's coding abilities. Claude 3.7 Sonnet achieved 62.3% accuracy on SWE-bench Verified, but 70.3% with a "custom scaffold" that Anthropic developed.

More recently, Meta fine-tuned a version of one of its newer models, Llama 4 Maverick, to perform well on a particular benchmark, LM Arena. The vanilla version of the model scores considerably worse on the same evaluation.

Given that AI benchmarks, Pokémon included, are imperfect measures to begin with, custom and non-standard implementations threaten to muddy the waters even further. That is to say, it doesn't seem likely that comparing models will get any easier as they're released.
