Geoffrey Hinton’s AI Alarm: Smarter Than Us, Sooner Than You Think
- Craig Wilson

- Nov 2
When one of the most celebrated minds in artificial intelligence starts sounding the alarm, it’s time to listen. Geoffrey Hinton, often called the “Godfather of AI” and a winner of the 2018 Turing Award (frequently described as the “Nobel of computing”) as well as the 2024 Nobel Prize in Physics, has shifted his focus from developing artificial intelligence to issuing urgent warnings about its dangers.
In a recent interview transcript shared by the BBC, Hinton compared the rise of AI to the arrival of an alien species—only this time, we’re building them ourselves. “Suppose some telescope had seen an alien invasion fleet arriving in ten years,” Hinton remarked. “We would be scared—and doing something about it.” Instead, he says, we're constructing our replacements, and most people still aren't acting like it's a problem.
Hinton believes the dominant model for AI control—humans as masters commanding obedient machine assistants—is not only naive, it's dangerous. “That’s just the wrong model,” he argues. “It’s not going to be like Star Trek, where you just say ‘make it so.’” Instead, he proposes a more humble analogy: we must think of ourselves as babies, needing to guide and influence a far more intelligent caregiver. The question is whether AI will “care” about us at all.
Corporate Competition and Misaligned Incentives
Hinton doesn't hesitate to name names. He praises DeepMind, Google, and Anthropic for at least acknowledging the existential risks of superintelligence. But he’s critical of others, including Meta and OpenAI, where he says safety priorities are eroding, with key researchers leaving in frustration.
The real issue? The race to AI dominance is overshadowing efforts to ensure its safe development. “They are much more concerned about the race,” Hinton warns. “They should be much more concerned about whether humanity will survive it.”
Economic Fallout: A Trillion-Dollar Job Killer?
While some economists remain optimistic that AI will create new jobs—as happened in previous industrial revolutions—Hinton isn’t convinced. This time, he argues, automation may outpace our ability to adapt. “You used to dig ditches, now you answer phones. But now even those jobs are going.” Companies pouring billions—perhaps trillions—into AI infrastructure are largely doing so with profit, not social good, in mind.
Amazon’s recent decision to cut roughly 4% of its corporate workforce, a move widely linked in part to AI-driven efficiencies, illustrates a broader trend: the very technology fueling stock market surges and a tech sector boom may soon leave millions behind.
Global Risks and Unlikely Unity
Despite geopolitical rivalries, Hinton sees a rare opportunity for global cooperation—at least on one front. “No one wants AI to take over,” he explains. “Not the Chinese Communist Party, not Donald Trump.” While nations disagree on everything from cybersecurity to bioengineering, there’s consensus on one thing: AI shouldn’t be in charge.
Would He Stop AI If He Could?
Hinton hesitates. AI could revolutionize healthcare and education. What concerns him most are the societal structures around it, the ones that let wealth concentrate at the top while livelihoods are automated away. “That’s not on AI,” he says. “That’s on how we organize society. Musk will get richer and a lot of people will get unemployed, and Musk won’t care.”
Waiting for a Chernobyl Moment
Perhaps the most chilling part of Hinton’s assessment is the suggestion that humanity might need a catastrophe, a failed AI takeover or a near miss in a digital arms race, to take the threat seriously. “Some people say our best hope is for AI to try to take over and fail,” he says. “We need something to scare the s*** out of us.”
Let’s hope we don’t need such a moment.