The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.
“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves; they will have it done to them. So it is very much in their interest to have the countries that are going to build it talk to each other.”
The countries thought most likely to build AGI are, of course, the US and China, and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.
Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.
“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.
The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to exploit the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes known as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.
The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.