‘Global Priority:’ AI Industry Leaders Warn of ‘Risk of Extinction’
The New York Times reports that more than 350 executives, researchers, and engineers from leading artificial intelligence companies have signed an open letter warning that the AI technology they are developing could pose an existential threat to humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads the statement released by the Center for AI Safety, a nonprofit organization. The signatories include top executives from OpenAI, Google DeepMind, and Anthropic, among others.
Some of the well-known signatories include Sam Altman, CEO of OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic. Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their groundbreaking work on neural networks, have also signed the letter.
The open letter comes at a time when worries about the possible negative effects of artificial intelligence are on the rise. Recent developments in “large language models”—the kind of AI system used by ChatGPT and other chatbots—have stoked concerns that AI may soon be used at scale to disseminate false information and propaganda or that it may eliminate millions of white-collar jobs.
Altman, Hassabis, and Amodei recently met with Vice President Kamala Harris and President Joe Biden to discuss AI regulation. Following the meeting, Altman testified before a Senate subcommittee, warning that the risks of advanced AI systems were serious enough to warrant government intervention.
The open letter served as a “coming-out” for some business leaders who had previously voiced concerns about the dangers of the technology they were building only in private, according to Dan Hendrycks, executive director of the Center for AI Safety. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
The letter also echoes a proposal put forward by OpenAI executives for the responsible governance of powerful AI systems. They called for cooperation among the leading AI developers, more technical research into large language models, and the creation of an international AI safety agency, akin to the International Atomic Energy Agency, which seeks to prevent the misuse of nuclear technology.
“I think if this technology goes wrong, it can go quite wrong,” Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan