AI: Non-Human Intelligence - Overhyped, or Underhyped?
In 2016, something strange happened on a Go board.
A photo was taken. Quiet, unassuming. But if you zoom in, it’s the moment the Earth shifted. And most people didn’t notice.
It was a move no human had ever seen in a game that had been played for over 2,500 years. Not by chance. Not by randomness. But by design.
A machine, AlphaGo, created it. And in doing so, it kicked off what former Google CEO Eric Schmidt calls the most important development in 500 years, or maybe ever: the rise of non-human intelligence.
At the time, most people didn’t understand what had just begun. But Schmidt and his colleagues did. This wasn’t just about beating a game. It was about creating something new, an algorithmic move no human had ever conceived.
It posed a haunting question: What else might machines see that we can't?
Since then, AI has leapt far beyond board games. Tools like ChatGPT captured our imagination by speaking fluently, mimicking human expression, and writing everything from emails to novels. But that’s not even the revolutionary part.
Behind the scenes, the real leap forward has been in reinforcement learning: AI that doesn’t just react but plans. Tools like DeepSeek and OpenAI’s models now simulate strategic thinking. They look ahead, revise, and move forward with purpose.
Schmidt describes using these systems to write research papers in minutes, burning through compute equivalent to tens of millions of dollars in infrastructure.
We’ve moved from language… to planning… to AI agency?
That will change everything.
But let’s talk reality: AI systems consume massive amounts of power.
According to Schmidt’s testimony to Congress, the U.S. will need an additional 90 gigawatts of electricity just to support future compute demands. That’s the equivalent of building 90 new nuclear power plants. (For reference: we're building… none.)
India and the Middle East are already constructing multi-gigawatt data centers. Think entire cities’ worth of electricity, just for machines.
And even if we power them, we’ve already burned through most of the public internet as training data. The next phase? Machines generating synthetic data for other machines.
Even that may not be enough.
Can AI Truly Invent?
Here’s the philosophical problem: AI can remix existing knowledge, but can it invent? Can it make the leap from known facts to radical discoveries?
Einstein didn’t just crunch numbers—he saw parallels across disciplines and invented new ways of thinking. AI, for now, can’t do that.
But researchers are chasing that holy grail. It’s called non-stationarity of objectives—teaching AI to adapt when the rules change, like humans do.
If they crack it, we may unlock entirely new schools of science. But we’ll also need exponentially more power… and oversight.
Should We Hit Pause?
AI “gurus” like Yoshua Bengio argue that we should halt the development of autonomous AI agents, systems that act independently.
Schmidt disagrees. In a globally competitive world, stopping AI isn’t realistic. Guiding it is.
He lays out red lines:
No recursive self-improvement without supervision
No direct access to weapons (too late!)
No self-replicating agents
But those boundaries require one thing: observability. We must be able to watch what these systems are doing—especially when they stop speaking human languages.
Geopolitics
Here’s where things get geopolitical.
The U.S. is building closed models. China is leaning into open-source AI.
That sounds egalitarian—until dangerous capabilities become widely accessible. Schmidt warns of a future where adversaries weaponize open models, even advocating preemptive strikes to slow AI rivals.
A chilling scenario:
If your adversary is six months ahead on the path to superintelligence, you might do whatever it takes to stop them. Including sabotage. Including… preemptive strikes, or war.
These conversations are already happening in serious national security circles.
Another paradox: to keep AI safe, we may have to build tools that look like a surveillance state.
Proof-of-personhood. AI moderation. Identity verification.
Schmidt urges caution: we can preserve human freedom—if we use cryptographic techniques like zero-knowledge proofs. The challenge is ensuring the tools meant to protect us don’t become tools of oppression.
Despite the risks, Schmidt remains an optimist.
What if we cured all known diseases? What if we gave every child a personalized tutor in their own language? What if overburdened village doctors had world-class diagnostic AI in their pocket?
These things are technically possible today. The only thing missing is the will and the economic incentive to build them.
We’re not talking about far-off sci-fi. We’re talking about choices we could make right now.
If agentic AI succeeds at scale, we could see 30% productivity increases per year.
No economic model even knows how to handle that.
The challenge will be managing abundance, not scarcity.
Ride the Wave—Every Day
This is a marathon, not a sprint. If you’re not using these tools, whatever your field, you’re falling behind. Artists. Doctors. Teachers. Builders. Coders.
This isn’t just a technology shift. It’s a civilizational one.
Adopt it. Adapt to it. Build with it. And don’t look away.
The age of non-human intelligence has arrived—quietly, but unmistakably.
Let’s make sure we don’t screw it up.