BREAKING: AI could pose risk of human extinction, 350 top AI leaders warn
An open statement, signed by hundreds of prominent AI professionals, warns that the continued development of artificial intelligence could eventually pose an existential threat to humanity.
This grave assertion comes from the Center for AI Safety, a non-profit organization dedicated to safeguarding human society against potential AI-induced risks.
Among the statement's 350 signatories are notable figures such as Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. The statement reads in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
In recent months, experts have increasingly cautioned about the potential harms of generative AI, highlighting the risk that the technology could be misused to spread misinformation rapidly and the threat it poses to large numbers of white-collar jobs it could render obsolete.
Earlier in the year, more than a thousand notable tech leaders, including Tesla CEO Elon Musk, signed a letter urging a six-month moratorium on the development of advanced AI models, underscoring the tech industry's escalating concern about the potential repercussions of unchecked AI growth.
The statement released on Tuesday also drew other prominent signatories from the AI industry, including Microsoft Chief Technology Officer Kevin Scott and OpenAI Head of Policy Research Miles Brundage.
Addressing the brevity of the 22-word statement, the Center for AI Safety said on its website: “It can be difficult to voice concerns about some of advanced AI’s most severe risks.”
“The succinct statement below aims to overcome this obstacle and open up discussion,” the organization added.