Anthropic’s CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek’s performance was “the worst of basically any model we’d ever tested,” Amodei claimed. “It had absolutely no blocks whatsoever against generating this information.”
Amodei said this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team looks at whether models can generate bioweapons-related information that isn’t easily found on Google or in textbooks. Anthropic positions itself as the AI foundational model provider that takes safety seriously.
Amodei said he didn’t think DeepSeek’s models today are “literally dangerous” in providing rare and dangerous information, but that they might be in the near future. Although he praised DeepSeek’s team as “talented engineers,” he advised the company to “take seriously these AI safety considerations.”
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China’s military an edge.
Amodei didn’t clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn’t immediately respond to a request for comment from TechCrunch. Neither did DeepSeek.
DeepSeek’s rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in its safety tests, achieving a 100% jailbreak success rate.
Cisco didn’t mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It’s worth mentioning, though, that Meta’s Llama-3.1-405B and OpenAI’s GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek’s rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms – ironically enough, given that Amazon is Anthropic’s largest investor.
Meanwhile, there’s a growing list of countries, companies, and especially government organizations like the U.S. Navy and the Pentagon that have started banning DeepSeek.
Time will tell if these efforts catch on or if DeepSeek’s global rise will continue. Either way, Amodei says he does consider DeepSeek a new competitor on the level of the U.S.’ top AI companies.
“The new fact here is that there’s a new competitor,” he said on ChinaTalk. “In the big companies that can train AI — Anthropic, OpenAI, Google, perhaps Meta and xAI — now DeepSeek is maybe being added to that category.”