
Nvidia: “Let There Be Light”
Posted April 17, 2026
Chris Campbell
Jensen Huang just wrote two checks.
$2 billion to Lumentum. $2 billion to Coherent.
Two companies most people have never heard of. Two companies that don't make chips.
Two companies that make light.
And if you've been following the AI infrastructure thesis we've been building out—this is the moment where everything clicks.
And it’s happening at 186,000 miles per second.
Copper Is Cooked
Every headline in AI points at chips.
Faster chips. Smaller transistors. The 3-nanometer wall.
BUT…
The engineers actually building the next generation of AI clusters keep talking about something else entirely.
Wires.
Right now, the most advanced AI training systems run tens of thousands of GPUs in constant conversation—sharing partial answers back and forth, billions of times per second, until the model finds something it likes.
And what carries all those signals? Copper traces.
Electrical signals moving through metal. The same basic technology we've been using since the telegraph.
At the speed these systems require, copper hits a wall.
It generates heat. It loses signal over distance. And every bit of data it moves costs energy—tiny per bit, but enormous at scale.
The analogy: pushing electrons through copper is like pushing water through a straw.
You can widen it. Push harder. But friction is a property of the medium itself. In copper, that friction is electrical resistance. No engineering decision removes it.
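For a sense of scale, here's a back-of-envelope sketch in Python. The per-bit energy, bandwidth, and cluster-size figures are illustrative assumptions for the arithmetic, not specs from any actual system:

```python
# Back-of-envelope: power spent just moving bits over copper in a big cluster.
# All figures below are illustrative assumptions, not vendor numbers.

PJ = 1e-12  # one picojoule, in joules

energy_per_bit_j = 5 * PJ       # assumed copper link cost: ~5 pJ per bit
gpu_bandwidth_bps = 7.2e12      # assumed ~900 GB/s of GPU-to-GPU traffic
num_gpus = 50_000               # assumed cluster size

watts_per_gpu = energy_per_bit_j * gpu_bandwidth_bps
cluster_megawatts = watts_per_gpu * num_gpus / 1e6

print(f"{watts_per_gpu:.0f} W per GPU just to move bits")   # 36 W
print(f"{cluster_megawatts:.1f} MW across the cluster")     # 1.8 MW
```

Tiny per bit. Megawatts at scale. That's the wall.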
Let There Be Light
A photon is a particle of light. The smallest possible unit of it.
If light is water, a photon is a single molecule.
Light carries no charge. Generates almost no heat. Barely degrades over distance.
And here's the wild part—you can send dozens of independent data streams through a single optical channel at once, each riding a different color of light, no interference between them.
Like lanes on a highway, except every lane runs at the speed of light.
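The lane math is plain multiplication. A quick sketch, with channel counts and per-channel rates chosen purely for illustration, not tied to any specific product:

```python
# Wavelength-division multiplexing: independent data streams share one
# optical channel, each riding its own wavelength ("color") of light.
# Figures are illustrative assumptions.

wavelengths = 8              # assumed number of colors on one fiber
gbps_per_wavelength = 100    # assumed data rate per color

aggregate_gbps = wavelengths * gbps_per_wavelength
print(f"{aggregate_gbps} Gb/s through a single optical channel")  # 800 Gb/s
```

Add a lane in copper and you add a wire, plus its heat and its crosstalk. Add a lane in light and you add a color.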
This physics has existed for decades in telecom.
What's happening now is that engineers are shrinking it to the scale of a chip. Down to the connection between two GPUs sitting centimeters apart.
The chips are doing their job. The wires are the constraint. And replacing those wires with light now has $4 billion of Nvidia's money behind it.
Here's the number that matters…
MIT found that photonic chips can do the core math behind every AI model—called “matrix multiplication”—at the speed of light, without ever converting to an electrical signal.
Potential energy reduction? Orders of magnitude below traditional chips. Some projections go as far as a millionfold improvement in energy per operation.
There’s a lot of talk about model improvements delivering 3%, 5%, maybe 10% efficiency gains.
At scale, those matter. A lot.
But photonics is in an entirely different universe.
Even if real-world numbers land at 50% or 60%—consider what that means when AI energy consumption is becoming a grid-level policy concern for national governments.
And here's why I think Jensen’s $4B investment isn't guesswork:
Nvidia has been cautious about photonics in the past.
Jensen stuck with copper on Nvidia's latest server hardware because it saved energy. He picked the boring option when the boring option worked.
The $4 billion tells you he thinks copper’s on its way out.
Markets read it the same way. Lumentum closed up nearly 12% on the announcement. Coherent jumped 15%.
The Next Substrate Shift
Every major hardware transition in computing—multi-core, GPUs, cloud—created a new class of winners.
The companies that understood the new substrate before most people knew it existed shaped what came next.
Photonics is that next shift.
The AI you'll use in 2028 may work dramatically better for one reason: the wires finally got out of the way.
Four billion dollars on light. That's the signal. We're tracking the most asymmetric names behind it.
More soon.
