Hundreds of the world’s top artificial intelligence scientists and tech executives have united to raise the alarm over the risk AI poses to humanity.
In a one-sentence statement released by the San Francisco-based non-profit Center for AI Safety, the signatories said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The Center for AI Safety has been working to convince major industry players to come out publicly with their concerns that artificial intelligence is a potential existential threat to humanity. The open letter was signed by more than 350 scientists, engineers and executives, including Sam Altman, CEO of ChatGPT maker OpenAI, and Demis Hassabis, CEO of Google’s DeepMind artificial-intelligence unit, which has developed Bard.
The so-called godfather of AI, Geoffrey Hinton, who left Google this year so he could publicly voice his concerns, and Yi Zeng, director of the AI lab at the Institute of Automation, Chinese Academy of Sciences, also added their names to the letter.
The signatories are among the major players who have been pushing new “generative” AI models to the masses, such as image generators and chatbots that can have humanlike conversations, summarize text and write computer code.
But not everyone agrees.
Professor Geoff Webb, Department of Data Science & AI, Faculty of Information Technology, says: “Extraordinary recent advances in AI have led to alarming predictions of existential threats to humanity. While there are many risks associated with the new technologies, there are also many benefits. We need to do our utmost to control the risks while ensuring Australia shares in the many benefits. There are good reasons to be concerned, but predictions of the ‘end of humanity’ are overblown.
“Be alert, not alarmed. Instead of being paralysed by fear we should seize the opportunities by investing in training to enable our workforce to best benefit from the power of AI, and invest in research so that we can adapt technologies to serve our national interest.”
OpenAI’s ChatGPT Triggered An AI Race
OpenAI’s ChatGPT bot was the first to launch in November last year, triggering an AI race that led Microsoft and Google to launch their own versions earlier this year. There are now dozens of powerful models for language, images, voices and coding.
The Washington Post quotes Dan Hendrycks, the computer scientist who leads the Center for AI Safety, explaining that the single-sentence letter was designed to ensure the core message isn’t lost.
“We need widespread acknowledgment of the stakes before we can have useful policy discussions,” Hendrycks wrote in an email. “For risks of this magnitude, the takeaway isn’t that this technology is overhyped, but that this issue is currently underemphasized relative to the actual level of threat.”
Hendrycks added that “ambitious global coordination” might be required to deal with the problem, possibly drawing lessons from both nuclear non-proliferation and pandemic prevention. Though a number of ideas for AI governance have been proposed, no sweeping solutions have been adopted.
Earlier this month, G7 leaders meeting in Japan agreed that discussions needed to take place and that the rules for digital technologies like AI should be “in line with our shared democratic values”.
Many nations, along with global blocs like the EU, are trying to determine how to regulate and rein in the AI race.