Microsoft’s close partner and collaborator, OpenAI, might be about to suggest that DeepSeek stole its IP and violated its terms of service. But Microsoft still wants DeepSeek’s shiny new models on its cloud platform.
Microsoft today announced that R1, DeepSeek’s so-called reasoning model, is available on the Azure AI Foundry service, Microsoft’s platform that brings together a number of AI services for enterprises under a single banner. In a blog post, Microsoft said that the version of R1 on Azure AI Foundry has “undergone rigorous red teaming and safety evaluations,” including “automated assessments of model behavior and extensive security reviews to mitigate potential risks.”
In the near future, Microsoft said, customers will be able to use “distilled” flavors of R1 to run locally on Copilot+ PCs, Microsoft’s brand of Windows hardware that meets certain AI readiness requirements.
“As we continue expanding the model catalog in Azure AI Foundry, we’re excited to see how developers and enterprises leverage […] R1 to tackle real-world challenges and deliver transformative experiences,” continued Microsoft in the post.
The addition of R1 to Microsoft’s cloud services is a curious one, considering that Microsoft reportedly initiated a probe into DeepSeek’s potential abuse of its and OpenAI’s services. According to security researchers working for Microsoft, DeepSeek may have exfiltrated a large amount of data using OpenAI’s API in the fall of 2024. Microsoft, which also happens to be OpenAI’s largest shareholder, notified OpenAI of the suspicious activity, per Bloomberg.
But R1 is the talk of the town, and Microsoft may have been persuaded to bring it into its cloud fold while the model still holds allure.
What’s unclear is whether Microsoft made any modifications to the model to improve its accuracy and combat its censorship. According to a test by information-reliability organization NewsGuard, R1 provides inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refuses to answer 85% of prompts related to China, possibly a consequence of the government censorship to which AI models developed in the country are subject.