xAI’s promised safety report is MIA | TechCrunch


Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.

xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.

Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.

As The Midas Project noted in a blog post on Tuesday, however, the draft only applied to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.

In the draft, xAI said that it planned to release a revised version of its safety policy “within three months,” by May 10. The deadline came and went without acknowledgement on xAI’s official channels.

Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.

That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that this seeming deprioritization of safety work is coming at a time when AI is more capable, and thus potentially more dangerous, than ever.
