DeepSeek Didn’t Steal Harry Potter… It’s Worse.

Chris Campbell

Posted January 27, 2026

Last year, a group of AI researchers asked an AI model to reproduce a Harry Potter book.

It did.

Not a summary. Not a remix. The whole thing.

Almost word for word.

A lot of people heard about this study and came to the same conclusion: “The machines are stealing books.”

But that’s not what the researchers found.

What they discovered is, in fact, a lot crazier.

And it has everything to do with where AI is headed next and one familiar name AI companies love to hate: DeepSeek.

Today, let’s look at what DeepSeek just pulled off… what it’s screaming from the rooftops to AI investors… and where the proverbial puck is headed.

How AI Actually Works

First things first…

Contrary to popular thought, AI models don’t “remember” anything specific. They don’t copy books and stash them somewhere. There’s no file, no pages, no retrievable text.

Instead of storing facts in a database, everything an AI “knows” is spread across billions of tiny numbers called weights.

Each weight is just a dial that says how much one thing should influence another.

When you ask a question, the model spins all those dials at once and reconstructs the answer from scratch, every single time, using a lot of computation.
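As a loose analogy (not the actual Transformer math), you can picture it in a few lines of code. Everything here — the vocabulary, the numbers — is made up purely for intuition: no sentence is stored anywhere, and every answer is recomputed from the weights each time.

```python
# Toy illustration: a "model" is just a grid of weights (dials).
# The numbers below are invented for this example.

vocab = ["the", "boy", "who", "lived"]

# weights[i][j] = how strongly word i "pulls toward" word j next.
weights = [
    [0.1, 2.0, 0.1, 0.1],   # after "the", the dial points to "boy"
    [0.1, 0.1, 2.0, 0.1],   # after "boy", the dial points to "who"
    [0.1, 0.1, 0.1, 2.0],   # after "who", the dial points to "lived"
    [2.0, 0.1, 0.1, 0.1],   # after "lived", loop back to "the"
]

def next_word(word: str) -> str:
    """Recompute the likeliest next word from the weights -- every time."""
    row = weights[vocab.index(word)]
    return vocab[row.index(max(row))]

def generate(start: str, n: int) -> list[str]:
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1]))
    return out

print(generate("the", 3))  # ['the', 'boy', 'who', 'lived']
```

Notice there’s no text file to read back. The phrase only emerges because the dials happen to point that way — which is exactly why reproducing a whole book this way is so compute-hungry.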

So, in short… 

Instead of reading Harry Potter back, it burnt a huge amount of compute to slowly reconstruct the text from statistical patterns it learned during training.

Sure, it works. But it’s also wildly inefficient and expensive.

The good news: inefficiency is where breakthroughs hide.

DeepSeek Did It Again

Of course, this efficiency issue isn’t anything new.

Researchers have known for years that large language models rely heavily on simple patterns—common phrases, repeated word sequences, and basic language statistics that show up again and again in text.

In fact…

If you strip away the hype, a shocking amount of what these models do looks like advanced autocomplete.

DeepSeek, the infamous Chinese AI lab, recently decided to stop fighting that reality and design around it.

Their latest insight is simple: If a model keeps rediscovering the same language facts again and again, why not give it a cheap, built-in way to remember those strings of facts?

Not bolted on. Not an external database. Not a hack. Inside the architecture itself.

Think of it like this…

Your Brain Already Does This

Big, complex systems eventually learn the same lesson: don’t use precious, finite power on things you already know.

Brains figured this out a long time ago.

You don’t consciously think about how to string together sentences or tie your shoes. You don’t even think about driving 80% of the time. Unless something unexpected happens, it’s automatic. (Cops know this as “highway hypnosis.”)

The knowledge to do these things lives in memory, freeing up thinking for harder problems.

It makes sense.

When solving difficult problems, you don’t want to waste mental energy remembering that Paris is the capital of France or how common phrases usually end. You want those things handled automatically, so your brain can focus on the hard stuff.

Computers evolved in the same way. Early machines recalculated everything. Modern ones separate storage from processing, so the CPU isn’t wasting time rediscovering what’s already saved.

AI is reaching that point now.

Most models still use expensive computation to recreate basic facts and patterns over and over.

But the shift underway now looks like this: give AI cheap memory for paths it’s already traveled, and save the heavy thinking for what actually requires thought.

That’s what DeepSeek built.

They added a new internal component—basically a fast, learned memory—that:

  • stores extremely common language patterns and factual snippets
  • retrieves them instantly during generation
  • costs almost no compute to use

The model still reasons. Still thinks. Still uses Transformers.

It just doesn’t have to keep reinventing the wheel.
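A rough sketch of the principle in ordinary code — emphatically not DeepSeek’s architecture, just the caching idea it rhymes with: route patterns the system has already seen through a cheap lookup, and pay for expensive computation only on new territory.

```python
# Illustrative only: "expensive_compute" stands in for a full model
# forward pass; "memory" stands in for the cheap, built-in recall.

calls = {"expensive": 0}

def expensive_compute(pattern: str) -> str:
    """Stand-in for heavy computation; counts how often it runs."""
    calls["expensive"] += 1
    return pattern.title()

memory: dict[str, str] = {}  # the cheap lookup

def generate(pattern: str) -> str:
    if pattern in memory:                 # path already traveled: recall
        return memory[pattern]
    result = expensive_compute(pattern)   # new territory: actually think
    memory[pattern] = result
    return result

# Ask for the same common fact three times...
for _ in range(3):
    generate("paris is the capital of france")

print(calls["expensive"])  # 1 -- two of the three requests hit memory
```

The payoff is the same one the article describes: the expensive path fires once instead of every time, and the saved compute is free to go toward harder problems.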

Why This Is a Bigger Deal Than It Sounds

This change does three important things.

First, it frees up compute. Less wasted effort means more room for reasoning, math, code, and long-horizon thinking.

Second, it scales differently. Instead of only scaling models by adding more layers, you can scale memory capacity cheaply. That’s a new axis of progress.

Third, it changes the economics. If you can get better performance without massively increasing inference costs, suddenly:

  • smaller models punch above their weight
  • reasoning-heavy tasks get cheaper
  • AI systems become more deployable, not less

That’s a signal.

The Big Picture for AI Investors

DeepSeek’s latest breakthrough is a sign of where the field is heading.

The first phase of AI was brute force: More data. Bigger models. More compute.

The next phase is architectural efficiency: Separating memory from reasoning. Letting machines recall facts cheaply and think deeply when it matters.

Brains did it. Computers did it. AI is doing it now.

And when efficiency jumps, markets tend to underestimate what comes next.
