
"Nobel Laureate Welcomes News AI Likely Won't Dominate Earth"

Nobel laureate Geoffrey Hinton is raising the alarm about AI development, emphasizing its possible dangers and the potential for the technology to seize control.

"Nobel Laureate Welcomes News AI Likely Won't Dominate Earth"

The Warning from the AI Pioneer

Geoffrey Hinton, fondly known as the "Godfather of AI," has some nagging worries about the future of artificial intelligence (AI). The researcher, who won the Nobel Prize in Physics in 2024, fears that the technology is advancing at an alarming rate and that humanity might not be prepared to handle a superintelligent AI.

In a candid conversation with CBS News, Hinton warned that once an AI surpasses human intelligence, it could take control. He compared the development of AI to raising a tiger cub: cute and cuddly at first, but cause for alarm if you can't be sure it won't turn on you once it grows up.

Hinton puts the odds of an AI system taking control at around 10 to 20 percent, though he emphasized that such estimates are inherently uncertain. One reason for his concern is the rise of AI agents that not only answer questions but can also solve tasks and execute them independently. In his view, that makes the situation more frightening than it was before.

The timeframe for the emergence of superintelligent AI might also be shorter than expected. Last year, Hinton believed it would take 5 to 20 years for AI to outsmart humans in every field. Today, he thinks it's possible that it will happen within 10 years or less.

Fear of a Tiger Cub

"Beings that are more intelligent than you will be able to manipulate you," Hinton said. He likened the advancement of AI to the growth of a tiger cub. It's all fine and dandy when the tiger cub is small and playful. But if you're not sure it won't turn on you when it grows up, you should be worried.

Hinton's concern centers on autonomous AI agents that can act independently in the physical world, which he sees as a greater threat than passive language models. He has been involved with AI for decades and has watched the technology evolve at a staggering pace, but he warns that this rapid growth comes with serious risks.

Racing Towards Disaster

Hinton is concerned about the global competition between technology companies and nations, which makes it "very, very unlikely" that humanity will forgo developing superintelligence. "Everyone is chasing the next big thing," he said. "The question is whether we can shape it so that it never wants to take control."

Hinton is also disappointed with the tech companies he once admired. He said he was "very disappointed" that Google, where he worked for over a decade, had walked back its stance against military applications of AI. "I wouldn't be happy working for any of these companies today," he added.

Hinton left Google in 2023, citing his need to speak freely about the dangers of AI development. Today, he is a Professor Emeritus at the University of Toronto, where he continues to warn about the potential pitfalls of uncontrolled AI development. As the world races towards a future with advanced AI, Hinton is reminding everyone to slow down and consider the potential consequences before it's too late.


  1. Hinton finds the potential for superintelligent AI to manipulate humans troubling, comparing the technology's growth to the uncertainty of raising a tiger cub.
  2. Hinton's concerns are heightened by the development of autonomous AI agents that can act independently in the physical world, posing a greater threat than passive language models.
  3. Hinton argues that the global competition over AI development, with everyone chasing the next big thing, makes it highly unlikely that humanity will forgo creating superintelligence.
  4. Despite once admiring the company, Hinton left Google in 2023 so that he could speak freely about the dangers of AI, and he has criticized its shift on military applications; he now serves as a Professor Emeritus at the University of Toronto, continuing to warn about the risks of uncontrolled AI development.
