Will AI Destroy the World?
Posted December 12, 2024
James Altucher
In the past 100 years, the world has almost ended a handful of times.
Not like, “Y2K is going to send us into Mad Max! Beans, bourbon, and bullets!”...
Not like, “The Large Hadron Collider is going to create a black hole and it will consume the Earth!”...
No.
Really. Almost. Ended.
People don’t talk about it enough. I probably don’t think about it enough.
Here’s one of the most famous stories:
In 1983, Stanislav Petrov, a Soviet lieutenant colonel, was sitting in a bunker outside Moscow, staring at a satellite early-warning screen screaming "incoming nuclear missiles."
The protocol? Alert the generals, launch the counterstrike, and effectively end life on Earth.
But Stanislav did something unexpected.
He hesitated.
He decided it was a false alarm, based on little more than gut instinct. He disobeyed orders and saved the world.
And thank the stars he did.
But here’s the scary part:
A machine, no matter how sophisticated, would have followed the protocol.
This is the kind of high-stakes scenario I talked about with Jim Rickards on the latest episode of the podcast.
Jim’s a guy I always learn a ton from.
He’s written bestsellers, advised the CIA, and probably forgotten more about financial systems than most of us will ever know.
When Jim talks about AI—especially its role in finance and global security—I consider what he says carefully.
We started with a simple premise: AI is everywhere.
From your phone suggesting what time to leave for a meeting, to hedge funds using it to pick stocks.
It’s embedded into the infrastructure of our lives.
But Jim's take isn’t a doom-and-gloom “robots are coming for your job” story.
Instead, he shows how this technology, while powerful, lacks one critical trait: common sense.
The Myth of AI’s “Intelligence”
Jim was quick to dispel the notion that AI is truly “intelligent.”
At its core, it’s math—powerful algorithms trained on massive datasets. It can identify patterns, mimic human behavior, and yes, even write an essay that fools high school teachers.
But does it understand? No. It doesn’t know why it’s making decisions, nor can it predict the unintended consequences of those decisions.
That’s where humans still have an edge.
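To make "mimics without understanding" concrete, here's a tiny sketch (my example, nothing from the episode): a bigram model that "learns" by counting which word follows which, then generates plausible-sounding text with zero grasp of meaning. The corpus and the `follows` table are invented for illustration.

```python
import random
from collections import defaultdict

corpus = ("the market rallied today . the market crashed today . "
          "investors panicked today . investors rallied anyway .").split()

# "Training": tally, for each word, which words follow it and how often.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": pick each next word purely by pattern frequency.
random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))   # sounds like the corpus; understands nothing
```

Scale that counting trick up by a few billion parameters and you get essays that fool high school teachers. The mechanism never acquires a "why."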
Case in point: financial markets.
Hedge funds are already using AI to read thousands of earnings reports, crunch numbers, and even pick stocks. And guess what? Some are outperforming their human counterparts.
But Jim points out the flaw: these models converge. Trained on similar data and tuned to the same signals, they react to the same triggers, which means when one fund starts to panic-sell, the rest follow like a stampede.
This kind of self-reinforcing feedback loop could trigger the next big crash.
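Here's a toy simulation of that stampede (my sketch, not anything from the episode): fifty make-believe funds, each programmed to dump its position past a drawdown trigger. The thresholds, the price-impact number, and the `simulate` helper are all invented for illustration.

```python
import random

def simulate(thresholds, shock=-0.05, impact=0.004, steps=30):
    """Final price after funds dump positions past their drawdown triggers."""
    peak = price = 100.0
    sold = [False] * len(thresholds)
    price *= (1 + shock)                      # an initial shock hits the market
    for _ in range(steps):
        drawdown = price / peak - 1
        sellers = [i for i, t in enumerate(thresholds)
                   if not sold[i] and drawdown <= -t]
        if not sellers:
            break
        for i in sellers:
            sold[i] = True
        price *= (1 - impact * len(sellers))  # each forced sale pushes price lower
    return price

random.seed(0)
same = [0.05] * 50                                        # identical triggers
mixed = [random.uniform(0.03, 0.30) for _ in range(50)]   # diverse triggers

print("identical models, final price:", round(simulate(same), 2))
print("diverse models, final price:  ", round(simulate(mixed), 2))
```

With identical triggers, one 5% dip fires every model at once and the selling compounds into a crash; with diverse triggers, a few funds sell, the rest absorb it, and the price stabilizes. Homogeneity, not the shock, is what does the damage.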
And it’s not just the stock market.
Jim explains how the same lack of diversity in AI systems could amplify risks in other areas, from logistics to national security.
A World Without Humans in the Loop
Then we got to the truly terrifying part: nuclear war.
During the Cold War, human intuition saved us more than once.
What happens if we take humans out of the loop and let AI make those decisions?
Can systems misinterpret data, like sunlight glinting off clouds (the very glitch behind Petrov's false alarm), and trigger catastrophic responses? Sure.
Without a human to say, “Wait a second, this doesn’t feel right,” AI could take us up the escalation ladder to nuclear annihilation.
So, what’s the solution?
Jim suggests two key things.
First, invest in cybernetics: systems designed to regulate themselves, like tapping the brakes on an icy road instead of slamming them. (There's a toy sketch of this idea below.)
Second, diversify your portfolio with non-digital assets like gold, silver, and land.
These are immune to the cascading failures of AI-driven systems.
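Here's that braking analogy as code, a minimal sketch of my own (Jim didn't write this): a feedback loop that corrects errors in proportion to their size settles down, while one that slams a maximal correction every time overshoots and spirals. The `regulate` helper and the gain values are made up for illustration.

```python
def regulate(gain, state=10.0, steps=10):
    """Each step, push the state back toward zero in proportion to the error."""
    history = [round(state, 1)]
    for _ in range(steps):
        state -= gain * state        # gain < 1 taps the brakes; gain > 2 slams them
        history.append(round(state, 1))
    return history

print("tapping (gain 0.4):", regulate(0.4))   # eases back to zero
print("slamming (gain 2.2):", regulate(2.2))  # overshoots worse every step
```

Gentle, self-dampening corrections are the whole point of a cybernetic system; a system hard-wired to react at full force is the icy-road skid.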
And then there’s the bigger question: What happens when AI starts replacing jobs faster than we can create new ones?
Jim doesn’t mince words here.
(In short, he says, the potential for dystopia is very real.)
But you’ll have to listen to the full podcast to hear his uncensored take.