$100M Signing Bonuses & The Future of AI
What Sam Altman’s AI Predictions Get Right—and What We Might Be Getting Wrong
Sam Altman says we’ve cracked reasoning.
That artificial intelligence isn’t just going to assist science, but discover it. New laws of physics. Novel insights in biology. Autonomous breakthroughs that humans might not even fully understand.
In a recent interview, Altman laid out a vision for the next 5 to 10 years of AI that’s as staggering as it is surreal. We’re not just talking about better chatbots. He’s predicting:
Virtual employees
Humanoid robots walking down streets
Self-directed scientific discovery
Ubiquitous AI companions that know you better than your family does
And yet, somehow, the world feels the same as it ever was. We work the same. We live the same.
Have We Really Cracked Reasoning?
Altman and others argue that today’s models now demonstrate something akin to reasoning. They can solve complex math problems, write code like top engineers, and perform at a PhD level in narrow domains.
But mimicking expert behavior isn’t the same as understanding.
We’re confusing high-quality autocomplete with insight. Cracking a math problem doesn’t mean the system knows why it matters. It just means it’s gotten very good at recognizing patterns and playing probability games.
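To make "probability games" concrete, here's a deliberately crude sketch. It's toy Python with an invented two-entry lookup table, nothing like a real model's architecture: the program "answers" prompts by sampling whichever token most often followed them, with no notion of why the answer is right.

```python
import random

# Toy sketch: "reasoning" as next-token probability lookup.
# The table is invented for illustration; real models learn
# distributions over huge vocabularies from vast corpora.
NEXT_TOKEN_PROBS = {
    "2 + 2 =": {"4": 0.97, "5": 0.02, "22": 0.01},
    "the capital of France is": {"Paris": 0.99, "Lyon": 0.01},
}

def complete(prompt: str) -> str:
    """Pick a continuation weighted by how often it followed the prompt."""
    dist = NEXT_TOKEN_PROBS[prompt]
    tokens = list(dist.keys())
    weights = list(dist.values())
    return random.choices(tokens, weights=weights)[0]

print(complete("2 + 2 ="))  # almost always "4": pattern match, not arithmetic
```

The output looks like arithmetic, but nothing in the program understands addition. Scaling that trick up by many orders of magnitude is genuinely impressive. It still isn't obviously the same thing as insight.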
What happens when an AI publishes a “true” scientific discovery that no human can verify? Do we take its word for it? Trust the machine?
Seems to me that's not science. That's faith.
Business May Be Harder Than Physics
One of Altman’s most counterintuitive claims is that AI might find it easier to discover new physics than to build an e-commerce business.
Strange, until you realize how messy the real world is.
Physics has rules. Business has people. And people are chaotic, emotional, and irrational. Markets fluctuate, tastes shift, and attention spans evaporate.
Still, this framing raises its own concern: Are we outsourcing judgment in exactly the place we need it most?
Science is messy too. Peer review, political agendas, and confirmation bias don’t disappear just because the math is solid. If we treat AI as infallible in the lab but untrustworthy in the market, we are blinding ourselves to where risk truly lies.
Robots, Reality, and the Social Distortion
Let’s talk embodiment. Altman believes we’ll have walking, humanoid robots in 5 to 10 years. And when that happens, he says, it’ll feel like the real arrival of the future.
But robotics experts aren’t so sure. Physical hardware is a much steeper curve than neural networks: batteries, friction, motor control, and real-world unpredictability don’t scale easily.
And it raises a deeper question:
What happens to our social fabric when we start forming emotional bonds with machines that don’t feel them back?
Because once a humanoid robot enters your living room, the uncanny valley will become a psychological minefield.
Superintelligence Might Feel Disappointing
Despite all the advancements, most people’s lives haven’t changed dramatically. You can talk to ChatGPT, sure—but you still go to work, pay bills, and scroll endlessly before bed.
Altman himself admits the most unsettling outcome might be this: AI reaches superintelligence and the world barely notices.
GDP doesn’t skyrocket. Inequality stays put. Civilization just sort of muddles through. Meh. As if intelligence weren’t the missing ingredient after all.
And maybe it isn’t. Maybe it’s infrastructure. Or ethics. Or leadership. Or our own slowness to adapt.
Meta v. Mission
Here’s one subplot everyone’s talking about: Meta (yes, that Meta) has been offering OpenAI researchers massive signing bonuses—up to $100 million in some cases.
So far, none have taken the bait.
Why? Because culture matters. Mission matters. Copying what someone else already did isn’t innovation. It’s imitation.
And imitation is a losing strategy.
The real war isn’t about who has the best model. It’s about who can keep making the next one. And that takes something no salary can buy.
Altman ends the interview on a haunting note.
He says the scariest possibility isn’t that AI fails—it’s that it succeeds. That we build god-like intelligence and still live like mortals. Still struggle with poverty, inequality, and meaning.
That we birth abundance and somehow still feel scarcity.
So here’s the question:
What if AI gives us everything we asked for but not what we actually need?
Because maybe the challenge ahead isn’t just technical. Maybe it’s philosophical.
And maybe intelligence, in the end, is not the same as wisdom.
Watch on YouTube, or listen on Spotify.