“AI is Going to Crash the Economy!”
Posted December 13, 2023
Chris Campbell
Will AI crash the economy?
Gary Gensler, head of the SEC, seems to think so.
In the featured article below, our Paradigm colleague Ray Blanco will give you all the details on:
→ Why Gary Gensler, of all people, is freaking out about AI
→ What this could mean for the future of AI
→ And how to invest in the solutions
More in a second.
First…
The Big Day is Upon Us
Every one of our world-class financial experts will reveal their BIGGEST predictions for 2024…
Plus offer up exclusive recommendations for how to profit in the year ahead…
And it’s all happening TODAY at 3PM EST.
Be in your seat at 3:00pm ET on Wednesday, Dec. 13th. (Yes, today.)
Click here to make sure you’re registered and ready to roll.
And read on.
The Next Financial Crisis
Ray Blanco
SEC Chairman Gary Gensler has taken a break from his crusade against cryptocurrency to square off against another emerging technology…
Artificial intelligence.
Specifically, Gensler is concerned about the widespread use of, and possible overreliance on, AI systems throughout Wall Street. He has gone as far as to say that AI could lead to the next financial crisis.
The Securities and Exchange Commission has started a “sweep,” requesting information and data from major investment firms in order to inform its regulatory efforts regarding the use of artificial intelligence on Wall Street.
This sweep builds on the SEC’s July proposal for new rules regarding data analytics and AI systems. The focus of that proposal was to address potential conflicts of interest that could arise from advanced predictive data analytics methods, like AI.
Gensler said this proposal was meant to address…
“...possibilities that conflicts may arise to the extent that advisers or brokers are optimizing to place their interests ahead of their investors’ interests. When offering advice or recommendations, firms are obligated to eliminate or otherwise address any conflicts of interest and not put their own interests ahead of their investors’ interests.”
While the call to action here is basically for firms to review any conclusions drawn from AI-based analysis for possible biases, Gensler has also voiced more dramatic concerns.
Earlier in the year, he went so far as to suggest that future observers might conclude “the crisis in 2027 was because everything was relying on one base level, what’s called [the] generative AI level, and a bunch of fintech apps are built on top of it.”
Reliance on a few specific AI providers could “drive us off an inadvertent cliff,” according to Gensler.
Another concern of the Chairman is something he calls “AI washing”: firms over-selling the power of their AI systems and using “AI” as a buzzword in misleading marketing.
This latest sweep shows that the SEC intends to broaden the scope of its AI regulation, but it does not mean the firms being questioned are suspected of any wrongdoing.
Balance Of Power
As I briefly mentioned earlier, Gary Gensler has built a reputation for flexing his regulatory muscles, arguably to the point of overreach, mostly through his handling of cryptocurrency regulation.
It’s easy to assume he’s doing the same here with AI. But it’s still important to consider the arguments being made by the SEC, especially given the power of this transformative new technology.
Artificial intelligence is being widely used among major investment firms.
Both BlackRock and JPMorgan Chase run their own AI research groups, while representatives for both Fidelity and Goldman Sachs have made statements acknowledging the potential for using AI in wealth management.
According to Wells Fargo analyst Mike Mayo, “If you’re a bank and don’t have an AI strategy, then you don’t have a strategy”.
While over-regulation and infringement on the free market are obvious concerns, you don’t need a long memory to recall the damage that can be done when all of Wall Street shares the same shortsighted investment strategy…
The SEC seems to be more interested in spreading out risk than creating blanket limitations on the use of potentially transformative AI systems.
The “cliff” Gary Gensler worries we could inadvertently drive off is what he describes as a forming “monoculture.”
Given the resources it takes to create large language models (LLMs), it’s likely that only a very small number of them will emerge from the initial AI frenzy. When that happens, most future AI applications will be built on top of the same few LLMs, which will likely be controlled by tech titans such as Google or Microsoft.
Firms both large and small would be forced to use the same datasets, or fall hopelessly behind by not using AI at all…
Fortunately, there are ways to invest in the solutions.
More on that later.
As usual, our teams are on the lookout for solutions, breakout technologies, and more in the AI space.
Stay tuned.