The Darker Shades of AI

Chris Campbell

Posted January 23, 2024

The year was 1962.

Nuclear annihilation seemed probable, verging on inevitable.

It didn’t help that the Cuban Missile Crisis was one of the first major international events to be covered on television -- which only stoked the flames of fear.

Families across the U.S. and other parts of the world were glued to their sets. The visual nature of TV made the crisis more tangible and emotionally impactful than radio ever could’ve.

Schools conducted regular "duck and cover" drills, and some families built or stocked fallout shelters in their homes, preparing for the worst.

You couldn’t walk 50 feet in public without hearing about it.

Everyone was flying in the dark.

The Fog of War

Problem was, both the U.S. and the Soviet Union had to interpret the other side’s intentions and capabilities.

The fog of war was thick.

The great risk was that a wrong interpretation would lead to a nuclear exchange.

The worry was far from unwarranted.

Back in 1914, after the assassination of Archduke Franz Ferdinand, there was also a flurry of confusion.

Leaders and diplomats misread each other’s intentions, causing them to rush into war.

Take, for example, the famous “Blank Check” assurance that Germany gave Austria-Hungary, signaling unconditional support. Germany thought it would act as a deterrent. Instead, Austria-Hungary took it as a green light to attack Serbia.

Russia’s decision to mobilize its army in 1914 was also meant as a deterrent against Austrian aggression. Germany interpreted this as an act of aggression.

These are only two of the many misreadings that flung the crisis into world war.

Now, the rub…

Both cases -- the Cuban Missile Crisis and WWI -- were shaped by the technology of their time.

Although we’re excited about tech’s future…

It would be crazy not to at least consider the ramifications of a technology as powerful as AI on the global stage.

In today’s featured piece below, our Paradigm colleague Jim Rickards reveals the potentially darker shades of AI…

How AI has already found its way into nuclear warfare…

What Ukraine and Gaza have to do with it…

And what he’s worried about.

Check it out below.

Could AI Start a Nuclear War?

Jim Rickards

AI in a command-and-control context can either malfunction and issue erroneous orders, as in the 1964 film Fail Safe, or, more likely, function as designed yet issue deadly prescriptions based on engineering errors, skewed training sets or strange emergent properties arising from correlations that humans can barely perceive.

Perhaps most familiar to contemporary audiences is the film’s sequence in which the president and the wife of Col. Grady, the bomber commander, try and fail to convince him to call off the attack. Grady had been trained to expect such appeals and to treat them as deceptions.

Today, such deceptions would be carried out with deepfake video and audio transmissions. Presumably, the commander’s training and dismissal of the pleas would be the same despite the more sophisticated technology behind them. Technology advances yet aspects of human behavior are unchanged.

Another misunderstanding -- this one real, not fictional -- that came close to causing a nuclear war was a 1983 incident codenamed Able Archer.

The roots of Able Archer go back to May 1981, when then-General Secretary of the Communist Party of the Soviet Union Leonid Brezhnev and KGB head Yuri Andropov (later general secretary) disclosed to senior Soviet leaders their view that the U.S. was secretly preparing to launch a nuclear strike on the Soviet Union.

Andropov then announced a massive intelligence collection effort to track the people who would be responsible for launching and implementing such an attack along with their facilities and communications channels.

At the same time, the Reagan administration began a series of secret military operations that aggressively probed Soviet waters with naval assets and flew directly toward Soviet airspace with strategic bombers that backed away only at the last instant.

These operations were ostensibly meant to test Soviet defenses, but they had the effect of reinforcing Soviet perceptions that the U.S. was planning a nuclear attack.

Analysts agree that the greatest risk of escalation and actual nuclear war arises when the two sides’ perceptions diverge in ways that make rational assessment of the escalation dynamic impossible. The two sides end up on different paths, making different calculations.

Tensions rose further in 1983 when the U.S. Navy flew F-14 Tomcat fighter jets over a Soviet military base in the Kuril Islands and the Soviets responded by flying over Alaska’s Aleutian Islands. On Sept. 1, 1983, Soviet fighter jets shot down Korean Air Lines Flight 007 over the Sea of Japan. A U.S. congressman was on board.

On November 7, 1983, the U.S. and NATO allies commenced an extensive war game codenamed Able Archer. It was intended to simulate a nuclear attack on the Soviet Union following a series of escalations.

The problem was that those escalations were only written out in the war game briefing books, not actually played out. What was simulated was the transition from conventional warfare to nuclear war.

This came at a time when the Soviets and the KGB were actively looking for signs of a nuclear attack. The simulations involving NATO Command, Control and Communications protocols were highly realistic, including participation by German Chancellor Helmut Kohl and UK Prime Minister Margaret Thatcher. The Soviets plausibly believed that the war game was actually cover for a real attack.

In the belief that the U.S. was planning a nuclear first strike, the Soviets determined that their only course of survival was to launch a preemptive first strike of their own. They ordered nuclear warheads to be placed on Soviet Air Army strategic bombers and put nuclear attack aircraft in Poland and East Germany on high alert.

This real-life brush with nuclear war had a backstory that is even more chilling. The Soviets had previously built an early warning satellite system with computer linkages using a primitive kind of AI, codenamed Oko.

On September 26, 1983, just weeks before Able Archer, the system malfunctioned and reported five incoming ICBMs from the United States. Oko alarms sounded and the computer screen flashed “LAUNCH.” Under the protocols, the LAUNCH display was not a warning but a computer-generated order to retaliate.

Lt. Col. Stanislav Petrov of the Soviet Air Defense Forces saw the computer order and had to immediately choose between treating the order as a computer malfunction or alerting his senior officers, who would likely commence a nuclear counterattack.

Petrov was a co-developer of Oko and knew the system made mistakes. He also estimated that if the attack were real, the U.S. would use far more than five missiles. Petrov was right. The computer had misread the sun’s reflection off some clouds as incoming missiles.

Given the tensions of the day and the KGB’s belief that a nuclear attack could come at any time, Petrov risked the future of the Soviet Union to override the Oko system. He relied on a combination of inference, experience, and gut instinct to disable the kill-chain.
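In software terms, Petrov applied a plausibility check that Oko’s protocol lacked. Here is a minimal sketch in Python of that kind of check; the threshold, function names and radar input are hypothetical, for illustration only, and are not the actual Oko logic:

```python
# Hypothetical sketch of a Petrov-style plausibility check -- not the real
# Oko logic. The insight: a genuine first strike would involve far more
# than five missiles, so a tiny reported salvo is more likely a sensor artifact.

EXPECTED_FIRST_STRIKE_MINIMUM = 100  # assumed threshold, for illustration

def assess_warning(reported_missiles: int, ground_radar_confirms: bool) -> str:
    """Classify an early-warning report instead of auto-escalating it."""
    if reported_missiles == 0:
        return "NO THREAT"
    if reported_missiles < EXPECTED_FIRST_STRIKE_MINIMUM and not ground_radar_confirms:
        # An implausibly small salvo with no corroborating sensor:
        # treat it as a probable malfunction and defer to a human officer.
        return "PROBABLE FALSE ALARM -- hold and verify"
    return "POSSIBLE ATTACK -- escalate to human command"

# The 1983 readings as input: five reported missiles, no radar confirmation.
print(assess_warning(5, ground_radar_confirms=False))
# -> PROBABLE FALSE ALARM -- hold and verify
```

The point of the sketch is what Oko’s protocol was missing: under that protocol, LAUNCH was an order, and only a human willing to distrust the machine stood between the alarm and a counterattack.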

The incident remained secret until well after the end of the Cold War. In time, Petrov was praised as “The Man Who Saved the World.”

The threat of nuclear war due to AI comes not just from the nuclear-armed powers but from third parties and non-state actors using AI to create what are called catalytic nuclear disasters.

The term catalytic comes from chemistry, where a catalyst triggers or accelerates a reaction among other compounds without itself being consumed in the reaction.

As applied in international relations, it refers to agents who might prompt a nuclear war among the great powers without themselves being involved in the war. That could leave the weak agent in a relatively strong position once the great powers had destroyed themselves.

AI/GPT systems have already found their way into the nuclear warfighting process.

It will be up to humans to keep these systems’ role marginal and data-oriented, not decision-oriented. Given the history of technology in warfare, from bronze spears to hypersonic missiles, it’s difficult to conclude AI/GPT will be so contained. If not, we will all pay the price.
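As a sketch of what data-oriented, not decision-oriented, could look like in practice, here is a hypothetical human-in-the-loop gate in Python. Every name and data structure here is illustrative, not drawn from any real system:

```python
# Hypothetical sketch: the AI may summarize and flag sensor data, but it
# can never act. Every action must pass an explicit human-authorization gate.

from dataclasses import dataclass

@dataclass
class Alert:
    summary: str       # AI-generated description of the data
    confidence: float  # model's own confidence, 0.0 to 1.0

def ai_triage(readings: list[float], threshold: float = 0.5) -> Alert:
    """The AI's entire role: describe anomalies. It never selects an action."""
    anomalies = [r for r in readings if r > threshold]
    return Alert(
        summary=f"{len(anomalies)} of {len(readings)} readings anomalous",
        confidence=len(anomalies) / max(len(readings), 1),
    )

def respond(alert: Alert, human_authorized: bool) -> str:
    # The decision gate: without an explicit human sign-off, nothing happens,
    # no matter how confident the model is.
    if not human_authorized:
        return f"Logged for human review: {alert.summary}"
    return f"Action authorized by a human officer: {alert.summary}"

alert = ai_triage([0.1, 0.9, 0.8, 0.2])
print(respond(alert, human_authorized=False))
# -> Logged for human review: 2 of 4 readings anomalous
```

The design choice is that the model’s output type is a description, never a command -- the opposite of Oko’s LAUNCH display, which was wired directly into the decision itself.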

Ukraine, Gaza, and AI all raise the odds of a nuclear war considerably. The financial implications of this for investors are simple.

In case of nuclear war, stocks, bonds, cash and other financial assets will be worthless. Exchanges and banks will be closed. The only valuable assets will be land, gold and silver.

It’s a good idea to have all three — just in case.
