The US Is Forming a Global AI Safety Network With Key Allies

The US is widely seen as the global leader in artificial intelligence, thanks to companies like OpenAI, Google, and Meta. But the US government says it needs help from other nations to manage the risks posed by AI technology.

At an international AI safety summit in Seoul on Tuesday, the US delivered a message from Secretary of Commerce Gina Raimondo announcing that a global network of AI safety institutes spanning the US, UK, Japan, Canada, and other allies will collaborate to contain the technology’s risks. She also urged other countries to join.

“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers,” Secretary Raimondo said in a statement released ahead of the announcement. “It is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

The US government has previously said advances in AI create national security risks, including the potential to automate or accelerate the development of bioweapons or to enable more damaging cyberattacks on critical infrastructure.

One challenge for the US, alluded to in Raimondo’s statement, is that some national governments may not be eager to fall in line with its approach to AI. She said the US, the UK, Japan, Canada, Singapore, and the European AI Office would work together as the founding members of a “global network of AI safety institutes.”

The Commerce Department declined to comment on whether China had been invited to join the new AI safety network. Fears that China will use advanced AI to empower its military or threaten the US led first the Trump administration and now the Biden administration to roll out a series of restrictions on Chinese access to key technology.

Amid rapid deployment of generative AI systems like ChatGPT last year, some prominent researchers and tech leaders began to speak more loudly about the potential for AI algorithms to become difficult to control and perhaps even a threat to humanity. Talk of the most far-off threats has since faded, but policymakers around the world are concerned about more immediate problems, such as the potential for generative AI tools like ChatGPT to spread disinformation and interfere with elections. In January, some voters in New Hampshire received robocalls using an AI-generated fake of Joe Biden’s voice.

Last October, President Biden issued a wide-ranging executive order to address the promise and pitfalls of rapid advances in AI, made most evident by the startling abilities of ChatGPT. The Commerce Department was ordered to work on a number of initiatives to develop AI safety standards and also to develop a plan for global “engagement on promoting and developing AI standards.”

Biden’s executive order also required the US National Institute of Standards and Technology, which is part of the Commerce Department, to establish a US AI Safety Institute to systematically test AI models to understand how they could be misused and how they might behave.

Source: Wired