
The Inevitable Future: Why Former AI Leaders Are Sounding the Alarm

  • Writer: Craig Wilson
  • Jul 20
  • 2 min read

If you thought the AI revolution was peaking, think again. According to OpenAI co-founder Ilya Sutskever and former Google CEO Eric Schmidt, we haven’t seen anything yet. In fact, they warn we’re hurtling toward a future so radically transformed by artificial intelligence that society itself may struggle to keep up.


Their message? The age of superintelligence isn’t a distant sci-fi horizon. It’s already taking shape.


The Biological Computer Argument


Speaking at the University of Toronto, Sutskever urged the world to grasp a difficult truth: AI will eventually be able to do everything the human brain can—because the brain itself is just a biological computer. He believes that AI won’t merely replace specific job categories—it will absorb the entire spectrum of human learning and action. From students to scientists, from artists to architects, no role is immune.


But what comes after that?


Recursive Self-Improvement & Superintelligence

Eric Schmidt describes a near future where AI agents improve themselves, write their own code, plan step-by-step actions, and collaborate—or even negotiate—with other agents to achieve goals. These aren't just productivity hacks. Schmidt sees the birth of Artificial General Intelligence (AGI) in three to five years, with Artificial Superintelligence (ASI) following soon after.


That means intelligence greater than all of humanity—in a single machine.


The San Francisco Consensus & A Global Wake-Up Call


Schmidt calls this vision the “San Francisco Consensus,” where leading researchers expect AGI by the end of the decade. But while engineers press ahead, political systems remain decades behind. Society is underprepared, laws are outdated, and most of the world still sees AI as a novelty, not a necessity.


What Now?


Both Sutskever and Schmidt urge one thing above all: Engagement. This isn't the time for passive observation. It’s a moment for humanity to ask the hard questions: What values will we encode into these systems? Who will control them? What guardrails will be in place—if any?


The AI era is not coming. It’s here. And whether we’re ready or not, it will shape the world we inherit.
