
CES: Things Get Physical in Vegas
Posted January 08, 2026
Chris Campbell
AI is leaving the screen.
So far, that’s one of the clearest takeaways from CES in Vegas.
Where’s it going, you ask? It’s embedding itself in the hardware that quietly runs the world.
Into vehicles, factories, power management, networking gear, security systems, robotics, and data-center control planes.
That migration reshapes the hierarchy—software drives behavior, but infrastructure defines the limits.
Paradigm is in Vegas this week, with Davis Wilson of The Million Mission on the CES floor tracking the real signals.
I reached him earlier today in the middle of Nvidia’s CES showcase.
“GPUs rightfully steal the spotlight,” he said. “Demand is off the charts and nothing else comes close in terms of efficiency.”
Yet, as he explained, that’s only part of the story…
“Nvidia isn’t just a hardware company anymore,” Davis went on. “They’ve built a software stack that makes their chips more valuable and much harder to replace. Walking the showroom floors, it’s eye-opening how many companies are using Cosmos/Omniverse to simulate real-world environments, train AI systems, and test robots and autonomous tech in virtual worlds before anything ever hits the physical world.
“I’ve said before that the speed of tech innovation has greatly increased over the last two years. Simulation tech created by Nvidia is a big reason for this.”
That shift—into simulation-first development—is no doubt changing how quickly ideas move from concept to reality.
But while it has the potential to compress years of iteration into weeks…
It also creates a new kind of pressure.
And… a new kind of opportunity.
Where the Fun Ends
Virtual environments operate without friction.
You can spin up a thousand simulated factories, vehicles, or robots in a single evening. You can rewind failures, fork outcomes, stress-test edge cases, and retrain systems continuously.
Time, cost, and physical risk? Puh! Relics of the past.
But, of course, physical systems themselves live under different laws. And simulation-led physical AI only amplifies those constraints.
Data still has to move between chips. Signals have to arrive on time. Power has to be delivered without overheating. Components have to stay synchronized across distance and scale.
The faster you can design and train, the more demanding deployment becomes.
Latency matters more. Energy efficiency matters more. Reliability matters more. Small inefficiencies that were once tolerable can quickly become very expensive learning opportunities.
Most don’t realize this, but…
Even today’s most advanced AI data centers still depend on copper wiring to connect chips. And moving electrons through copper creates heat in exactly the wrong places: between chips, across racks, inside dense clusters.
As models scale and simulations grow more complex, that legacy infrastructure starts acting its age. Latency rises. Power costs explode. Performance stalls.
Which explains something James has been watching closely. Something he expects to get very real this week during CES.
Beneath the headlines and product demos, capital has already been quietly flowing toward companies that sit in this overlooked layer…
The infrastructure that determines how fast AI can actually scale in the real world.
And as that reality sinks in, the pipes start looking a lot more interesting than the faucets.
That’s happening right now, in real time. In Vegas.
According to the signals we’re getting from CES and what James’ AI system is flagging…
As soon as this Friday, Jensen could step out with news involving a lesser-known California-based tech company. Maybe a buyout. Maybe something adjacent.
What we do know is this: when these signals show up, it usually means something is about to move.
James explains everything in a new video from his home office right here.
