AI Could Lead to Human Extinction: Experts

Recently, tech leaders from Google and Microsoft, along with hundreds of AI scientists and researchers, including OpenAI chief executive Sam Altman, issued a warning about the “perils that artificial intelligence poses to humankind,” the Associated Press reports.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

– In a statement signed by many of the industry’s leaders

The AP adds: “Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. It has sent countries around the world scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act expected to be approved later this year.”

The AP, quoting Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, writes:

  • “There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority. So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other.”
  • “Nobody is saying that GPT-4 or ChatGPT today is causing these sorts of concerns. We’re trying to address these risks before they happen rather than try and address catastrophes after the fact.”
  • “Given our failure to heed the early warnings about climate change 35 years ago, it feels to me as if it would be smart to actually think this one through before it’s all a done deal.”

The letter was reportedly signed by experts in nuclear science, pandemics and climate change, the AP writes.

David Krueger, an assistant computer science professor at the University of Cambridge, is also quoted as saying:

  • “Some of the hesitation in speaking out is that scientists don’t want to be seen as suggesting AI ‘consciousness or AI doing something magic,’ but AI systems don’t need to be self-aware or setting their own goals to pose a threat to humanity.”
  • “I’m not wedded to some particular kind of risk. I think there’s a lot of different ways for things to go badly. But I think the one that is historically the most controversial is risk of extinction, specifically by AI systems that get out of control.”

Cynthia Rudin, a computer science professor and AI researcher at Duke University, recently told CNN: “Do we really need more evidence that AI’s negative impact could be as big as nuclear war?”