Anthropic appoints a national security expert to its governing trust | TechCrunch


A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust.

Anthropic’s long-term benefit trust is a governance mechanism that Anthropic says helps it prioritize safety over profit, and which has the power to elect some of the company’s board of directors. The trust’s other members include Centre for Effective Altruism CEO Zachary Robinson, Clinton Health Access Initiative CEO Neil Buddy Shah, and Evidence Action President Kanika Bahl.

In a statement, Anthropic CEO Dario Amodei said that Fontaine’s appointment will “[strengthen] the trust’s ability to guide Anthropic through complex decisions” about AI as it relates to security.

“Richard’s expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations,” Amodei continued. “I’ve long believed that ensuring democratic nations maintain leadership in responsible AI development is essential for both global security and the common good.”

Fontaine, who as a trustee won’t have a financial stake in Anthropic, previously served as a foreign policy adviser to the late Sen. John McCain and taught security studies as an adjunct professor at Georgetown. For more than six years, he was president of the Center for a New American Security, a national security think tank based in Washington, D.C.

Anthropic has increasingly courted U.S. national security customers as it looks for new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic’s major partner and investor, Amazon, to sell Anthropic’s AI to defense customers.

To be clear, Anthropic isn’t the only top AI lab pursuing defense contracts. OpenAI is seeking to establish a closer relationship with the U.S. Defense Department, and Meta recently revealed that it’s making its Llama models available to defense partners. Meanwhile, Google is refining a version of its Gemini AI capable of working within classified environments, and Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models.

Fontaine’s appointment comes as Anthropic beefs up its executive ranks. In May, the company named Netflix co-founder Reed Hastings to its board.
