OpenAI CEO Sam Altman on the Dangers of Artificial General Intelligence (AGI)

6th May 2023 | 11:55pm

According to OpenAI CEO Sam Altman, Large Language Models (LLMs), like GPT-4, are part of the way toward building artificial general intelligence (AGI). He said as much on a recent episode of The Lex Fridman Podcast.

While Altman is hopeful that a future model like GPT-10 could come up with small but genuinely novel ideas, he does not consider GPT-4 to be an AGI.

In the interview with Fridman, Altman admits it would be "crazy" not to be at least a little afraid of AGI, and he does not take it as a criticism when people point out his cautious stance toward the technology. His own concerns center on disinformation campaigns and economic shocks occurring at a scale far beyond what society is prepared to handle.

Alongside these concerns, Altman acknowledges an impending surge of capable open-source LLMs with minimal or no safety controls. This development underscores the need to update earlier AI safety work, much of which was done before deep learning and LLMs were widely seen as a plausible path to AGI.

The timeline for AGI remains uncertain, but Altman stresses the importance of openly discussing the possibility that AGI could pose an existential threat to humanity. He advocates for developing new techniques to mitigate potential dangers and for iterating through complex problems in order to learn early and limit high-risk scenarios.

Despite these challenges, Altman envisions a future with an unimaginably good standard of living, without resorting to utopian idealism. He also touches on the topic of consciousness, admitting that he is open to the idea of a fundamental substrate underlying reality and that there is something "very strange going on with consciousness."